It's Not a Modality Gap: Characterizing and Addressing the Contrastive Gap

Bibliographic Details
Title: It's Not a Modality Gap: Characterizing and Addressing the Contrastive Gap
Authors: Fahim, Abrar; Murphy, Alex; Fyshe, Alona
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Computation and Language; Computer Science - Information Retrieval; Computer Science - Machine Learning
Description: Multi-modal contrastive models such as CLIP achieve state-of-the-art performance in zero-shot classification by embedding input images and texts in a joint representational space. Recently, a modality gap has been reported in two-encoder contrastive models like CLIP, meaning that the image and text embeddings reside in disjoint areas of the latent space. Previous studies suggest that this gap exists due to 1) the cone effect, 2) mismatched pairs in the dataset, and 3) insufficient training. We show that, even when accounting for all these factors, and even when using the same modality, the contrastive loss actually creates a gap during training. As a result, we propose that the modality gap is inherent to the two-encoder contrastive loss and rename it the contrastive gap. We present evidence that attributes this contrastive gap to low uniformity in CLIP space, resulting in embeddings that occupy only a small portion of the latent space. To close the gap, we adapt the uniformity and alignment properties of unimodal contrastive loss to the multi-modal setting and show that simply adding these terms to the CLIP loss distributes the embeddings more uniformly in the representational space, closing the gap. In our experiments, we show that the modified representational space achieves better performance than the default CLIP loss in downstream tasks such as zero-shot image classification and multi-modal arithmetic. (A minimal sketch of such a modified loss follows this record.)
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2405.18570
Accession Number: edsarx.2405.18570
Database: arXiv
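
The abstract above describes adding uniformity and alignment terms to the standard CLIP contrastive loss. Below is a minimal PyTorch sketch of what such a combined objective could look like, using the common formulations of alignment and uniformity from Wang and Isola (2020). This is not the authors' released implementation: the function names, the weighting factors `lam_align` and `lam_unif`, and the temperature value are illustrative assumptions.

```python
# Sketch: CLIP contrastive loss plus alignment and uniformity terms.
# Assumes L2-normalizable image/text embedding batches of equal size.
import torch
import torch.nn.functional as F

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE (CLIP-style) loss over normalized embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def alignment(img_emb, txt_emb, alpha=2):
    """Mean distance between matched image/text pairs (lower is better)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    return (img_emb - txt_emb).norm(dim=1).pow(alpha).mean()

def uniformity(emb, t=2):
    """Log of the average pairwise Gaussian potential; minimizing it
    spreads embeddings more uniformly over the hypersphere."""
    emb = F.normalize(emb, dim=-1)
    return torch.pdist(emb, p=2).pow(2).mul(-t).exp().mean().log()

def modified_clip_loss(img_emb, txt_emb, lam_align=1.0, lam_unif=1.0):
    """CLIP loss augmented with alignment and uniformity terms
    (weights are hypothetical placeholders)."""
    unif = 0.5 * (uniformity(img_emb) + uniformity(txt_emb))
    return (clip_loss(img_emb, txt_emb)
            + lam_align * alignment(img_emb, txt_emb)
            + lam_unif * unif)
```

As a usage note, `modified_clip_loss` would replace the plain contrastive objective during fine-tuning or training of the two encoders; the uniformity term penalizes embeddings that cluster in a small region of the latent space, which is the behavior the paper associates with the contrastive gap.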