SymDPO: Boosting In-Context Learning of Large Multimodal Models with Symbol Demonstration Direct Preference Optimization

November 17, 2024
作者: Hongrui Jia, Chaoya Jiang, Haiyang Xu, Wei Ye, Mengfan Dong, Ming Yan, Ji Zhang, Fei Huang, Shikun Zhang
cs.AI

Abstract

As language models continue to scale, Large Language Models (LLMs) have exhibited emerging capabilities in In-Context Learning (ICL), enabling them to solve language tasks by prefixing a few in-context demonstrations (ICDs) as context. Inspired by these advancements, researchers have extended these techniques to develop Large Multimodal Models (LMMs) with ICL capabilities. However, existing LMMs face a critical issue: they often fail to effectively leverage the visual context in multimodal demonstrations and instead simply follow textual patterns. This indicates that LMMs do not achieve effective alignment between multimodal demonstrations and model outputs. To address this problem, we propose Symbol Demonstration Direct Preference Optimization (SymDPO). Specifically, SymDPO aims to break the traditional paradigm of constructing multimodal demonstrations by using random symbols to replace text answers within instances. This forces the model to carefully understand the demonstration images and establish a relationship between the images and the symbols to answer questions correctly. We validate the effectiveness of this method on multiple benchmarks, demonstrating that with SymDPO, LMMs can more effectively understand the multimodal context within examples and utilize this knowledge to answer questions better.
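The core construction described above, replacing the text answers in multimodal demonstrations with random symbols so the model must ground its response in the demonstration images, can be sketched in Python. This is a minimal illustration, not the authors' implementation: the demo format (`image`, `question`, `answer` fields), the symbol length, and the helper names are all assumptions for the sketch.

```python
import random
import string

def make_symbol_map(answers):
    """Map each distinct text answer to a unique random symbol (e.g. 'QZX')."""
    used = set()
    mapping = {}
    for ans in answers:
        while True:
            sym = "".join(random.choices(string.ascii_uppercase, k=3))
            if sym not in used:
                used.add(sym)
                break
        mapping[ans] = sym
    return mapping

def symbolize_demonstrations(demos):
    """Replace each demo's text answer with its symbol.

    Returns the symbolized demos and the answer->symbol map, so the
    correct ('chosen') response to a query can be expressed as the
    symbol tied to its image, while the original text answer serves as
    the 'rejected' response a text-pattern-following model would give.
    """
    mapping = make_symbol_map({d["answer"] for d in demos})
    sym_demos = [
        {"image": d["image"], "question": d["question"],
         "answer": mapping[d["answer"]]}
        for d in demos
    ]
    return sym_demos, mapping

# Hypothetical demonstrations (file names and questions are placeholders).
demos = [
    {"image": "img_cat.jpg", "question": "What animal is this?", "answer": "cat"},
    {"image": "img_dog.jpg", "question": "What animal is this?", "answer": "dog"},
]
sym_demos, mapping = symbolize_demonstrations(demos)
```

With demonstrations rewritten this way, answering a new query correctly requires linking the query image back to the demonstration images that share its symbol; preference pairs built from the symbolic (chosen) versus original-text (rejected) answers can then be fed to a standard DPO objective.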


November 21, 2024