
Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination

November 6, 2024
Authors: Dingjie Song, Sicheng Lai, Shunian Chen, Lichao Sun, Benyou Wang
cs.AI

Abstract

The rapid progression of multimodal large language models (MLLMs) has demonstrated superior performance on various multimodal benchmarks. However, data contamination during training creates challenges for performance evaluation and comparison. While numerous methods exist for detecting dataset contamination in large language models (LLMs), they are less effective for MLLMs because of MLLMs' multiple modalities and multiple training phases. In this study, we introduce MM-Detect, a multimodal data contamination detection framework designed for MLLMs. Our experimental results indicate that MM-Detect is sensitive to varying degrees of contamination and can highlight significant performance gains caused by leakage of the training sets of multimodal benchmarks. Furthermore, we explore whether contamination originates in the pre-training phase of the LLMs on which MLLMs are built or in the fine-tuning phase of the MLLMs themselves, offering new insights into the stages at which contamination may be introduced.
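The abstract does not describe MM-Detect's internal tests, so the following is a minimal, hypothetical sketch (in Python) of one perturbation-based contamination heuristic in the same spirit: an option-order sensitivity test for multiple-choice multimodal benchmarks. The `answer` callable, its signature, and the scoring logic are illustrative assumptions, not the paper's actual interface.

```python
import random
from typing import Callable, Sequence

# Hypothetical model interface: (image_path, question, options) -> chosen option index.
AnswerFn = Callable[[str, str, Sequence[str]], int]


def order_sensitivity(
    answer: AnswerFn,
    image_path: str,
    question: str,
    options: Sequence[str],
    gold_index: int,
    n_shuffles: int = 5,
    seed: int = 0,
) -> dict:
    """Compare correctness on the original option order vs. shuffled orders.

    Intuition: a model that memorized a benchmark instance tends to stay
    correct only when the options appear in their original (published) order.
    """
    rng = random.Random(seed)

    # Correctness with the original, possibly memorized, option order.
    correct_original = answer(image_path, question, options) == gold_index

    # Correctness averaged over several random permutations of the options.
    correct_shuffled = 0
    for _ in range(n_shuffles):
        perm = list(range(len(options)))
        rng.shuffle(perm)
        shuffled = [options[i] for i in perm]
        pred = answer(image_path, question, shuffled)
        # Map the prediction back to the original indexing before scoring.
        if 0 <= pred < len(perm) and perm[pred] == gold_index:
            correct_shuffled += 1

    shuffled_accuracy = correct_shuffled / n_shuffles
    return {
        "correct_original": correct_original,
        "shuffled_accuracy": shuffled_accuracy,
        # A large positive gap hints at memorization of the original ordering.
        "gap": float(correct_original) - shuffled_accuracy,
    }
```

Aggregated over a benchmark's training split, a consistently large positive `gap` would suggest the model has seen those instances in their published form, whereas an uncontaminated model should be roughly order-invariant.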

