Towards Vision-Language Geo-Foundation Model: A Survey

Bibliographic Details
Title: Towards Vision-Language Geo-Foundation Model: A Survey
Authors: Zhou, Yue; Feng, Litong; Ke, Yiping; Jiang, Xue; Yan, Junchi; Yang, Xue; Zhang, Wayne
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Vision-Language Foundation Models (VLFMs) have made remarkable progress on various multimodal tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding. However, most methods rely on training with general image datasets, and the lack of geospatial data leads to poor performance on earth observation tasks. Numerous geospatial image-text pair datasets and VLFMs fine-tuned on them have been proposed recently. These new approaches aim to leverage large-scale, multimodal geospatial data to build versatile intelligent models with diverse geo-perceptive capabilities, which we refer to as Vision-Language Geo-Foundation Models (VLGFMs). This paper thoroughly reviews VLGFMs, summarizing and analyzing recent developments in the field. In particular, we introduce the background and motivation behind the rise of VLGFMs, highlighting their unique research significance. Then, we systematically summarize the core technologies employed in VLGFMs, including data construction, model architectures, and applications across various multimodal geospatial tasks. Finally, we conclude with insights, issues, and discussions regarding future research directions. To the best of our knowledge, this is the first comprehensive literature review of VLGFMs. We continuously track related works at https://github.com/zytx121/Awesome-VLGFM.
Comment: 18 pages, 4 figures
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2406.09385
Accession Number: edsarx.2406.09385
Database: arXiv