LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation
November 7, 2024
Authors: Weiquan Huang, Aoqi Wu, Yifan Yang, Xufang Luo, Yuqing Yang, Liang Hu, Qi Dai, Xiyang Dai, Dongdong Chen, Chong Luo, Lili Qiu
cs.AI
Abstract
CLIP is one of the most important multimodal foundational models today. What
powers CLIP's capabilities? The rich supervision signals provided by natural
language, the carrier of human knowledge, shape a powerful cross-modal
representation space. However, with the rapid advancements in large language
models (LLMs) like GPT-4 and LLaMA, the boundaries of language comprehension and
generation are continually being pushed. This raises an intriguing question:
can the capabilities of LLMs be harnessed to further improve multimodal
representation learning? The potential benefits of incorporating LLMs into CLIP
are clear. LLMs' strong textual understanding can fundamentally improve CLIP's
ability to handle image captions, drastically enhancing its ability to process
long and complex texts, a well-known limitation of vanilla CLIP. Moreover, LLMs
are trained on a vast corpus of text, possessing open-world knowledge. This
allows them to expand on caption information during training, increasing the
efficiency of the learning process. In this paper, we propose LLM2CLIP, a novel
approach that embraces the power of LLMs to unlock CLIP's potential. By
fine-tuning the LLM in the caption space with contrastive learning, we extract
its textual capabilities into the output embeddings, significantly improving
the output layer's textual discriminability. We then design an efficient
training process where the fine-tuned LLM acts as a powerful teacher for CLIP's
visual encoder. Thanks to the LLM's presence, we can now incorporate longer and
more complex captions without being restricted by the context window and
capability limitations of vanilla CLIP's text encoder. Our experiments
demonstrate that this approach brings substantial improvements in cross-modal
tasks.
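To make the two-stage procedure described in the abstract more concrete, below is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's released implementation: the LLM checkpoint name, mean pooling over hidden states, the adapter on the text side, and the loss temperature are all placeholders chosen for readability.

```python
# Minimal sketch of the two-stage LLM2CLIP idea (assumptions noted in comments).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


def clip_contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings (CLIP-style)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


class LLMCaptionEncoder(nn.Module):
    """Wraps a pretrained LLM and mean-pools its last hidden states into one
    embedding per caption (checkpoint and pooling choice are assumptions)."""

    def __init__(self, name="meta-llama/Meta-Llama-3-8B"):  # hypothetical choice
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(name)
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.llm = AutoModel.from_pretrained(name)

    def forward(self, captions):
        batch = self.tokenizer(captions, padding=True, truncation=True,
                               return_tensors="pt").to(self.llm.device)
        hidden = self.llm(**batch).last_hidden_state          # (B, T, D)
        mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
        return (hidden * mask).sum(1) / mask.sum(1)           # (B, D)


# Stage 1: contrastive fine-tuning of the LLM in caption space, pulling
# together two captions of the same image so its output embeddings become
# textually discriminative.
def stage1_step(text_encoder, captions_a, captions_b, optimizer):
    optimizer.zero_grad()
    loss = clip_contrastive_loss(text_encoder(captions_a),
                                 text_encoder(captions_b))
    loss.backward()
    optimizer.step()
    return loss.item()


# Stage 2: the fine-tuned LLM is frozen and acts as the text-side teacher;
# only the CLIP visual encoder, its projection head, and a small adapter on
# top of the LLM embeddings receive gradients.
def stage2_step(vision_encoder, image_proj, text_adapter,
                frozen_text_encoder, images, captions, optimizer):
    optimizer.zero_grad()
    with torch.no_grad():                              # teacher stays fixed
        text_emb = frozen_text_encoder(captions)
    image_emb = image_proj(vision_encoder(images))     # trainable vision tower
    loss = clip_contrastive_loss(image_emb, text_adapter(text_emb))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the LLM is frozen in the second stage, its caption embeddings can in principle be precomputed and cached, which is one way such a training process stays efficient: only the vision encoder and the lightweight adapter need gradients.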