MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency
February 13, 2025
Authors: Dongzhi Jiang, Renrui Zhang, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan Jin, Claire Guo, Shen Yan, Bo Zhang, Chaoyou Fu, Peng Gao, Hongsheng Li
cs.AI
Abstract
Answering questions with Chain-of-Thought (CoT) has significantly enhanced
the reasoning capabilities of Large Language Models (LLMs), yet its impact on
Large Multimodal Models (LMMs) still lacks a systematic assessment and in-depth
investigation. In this paper, we introduce MME-CoT, a specialized benchmark
evaluating the CoT reasoning performance of LMMs, spanning six domains: math,
science, OCR, logic, space-time, and general scenes. As the first comprehensive
study in this area, we propose a thorough evaluation suite incorporating three
novel metrics that assess reasoning quality, robustness, and efficiency at
a fine-grained level. Leveraging curated high-quality data and a unique
evaluation strategy, we conduct an in-depth analysis of state-of-the-art LMMs,
uncovering several key insights: 1) Models with a reflection mechanism
demonstrate superior CoT quality, with Kimi k1.5 outperforming GPT-4o and
achieving the highest-quality results; 2) CoT prompting often degrades LMM
performance on perception-heavy tasks, suggesting a potentially harmful
overthinking behavior; and 3) despite their high CoT quality, LMMs with
reflection exhibit significant inefficiency in both the normal-response and
self-correction phases. We hope MME-CoT serves as a foundation for advancing
multimodal reasoning in LMMs. Project Page: https://mmecot.github.io/
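To make the evaluated setting concrete, here is a minimal sketch of the direct-answer vs. CoT prompting comparison the abstract describes, assuming an OpenAI-compatible chat API. The model name, prompt wording, and helper function are illustrative placeholders, not the paper's actual MME-CoT pipeline (which is available via the project page).

```python
# Minimal sketch: querying an LMM on an image question with and without a
# CoT prompt, mirroring the direct-vs.-CoT comparison the paper studies.
# Assumes an OpenAI-compatible API; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(image_url: str, question: str, use_cot: bool) -> str:
    suffix = (
        "Think step by step, then state your final answer."
        if use_cot
        else "Answer with only the final answer."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any multimodal chat endpoint works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": f"{question}\n{suffix}"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# Scoring both settings per task category (math, science, OCR, logic,
# space-time, general scenes) surfaces the perception-heavy degradation
# under CoT that the paper reports.
```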