

Visual Question Decomposition on Multimodal Large Language Models

September 28, 2024
Authors: Haowei Zhang, Jianzhe Liu, Zhen Han, Shuo Chen, Bailan He, Volker Tresp, Zhiqiang Xu, Jindong Gu
cs.AI

Abstract

Question decomposition has emerged as an effective strategy for prompting Large Language Models (LLMs) to answer complex questions. However, while existing methods primarily focus on unimodal language models, the question decomposition capability of Multimodal Large Language Models (MLLMs) has yet to be explored. To this end, this paper explores visual question decomposition on MLLMs. Specifically, we introduce a systematic evaluation framework including a dataset and several evaluation criteria to assess the quality of the decomposed sub-questions, revealing that existing MLLMs struggle to produce high-quality sub-questions. To address this limitation, we propose a specific finetuning dataset, DecoVQA+, for enhancing the model's question decomposition capability. Aiming at enabling models to perform appropriate selective decomposition, we propose an efficient finetuning pipeline. The finetuning pipeline consists of our proposed dataset and a training objective for selective decomposition. Finetuned MLLMs demonstrate significant improvements in the quality of sub-questions and the policy of selective question decomposition. Additionally, the models also achieve higher accuracy with selective decomposition on VQA benchmark datasets.
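The selective decomposition described in the abstract can be pictured as a simple control flow: the model first judges whether a question warrants decomposition, and only then generates and answers sub-questions before producing the final answer. The sketch below is a minimal illustration of that idea, not the paper's pipeline; `query_mllm` is a hypothetical stand-in that returns canned responses so the flow is runnable.

```python
def query_mllm(image, prompt):
    # Hypothetical stand-in for a real MLLM call (API or local model).
    # Canned responses keep this sketch self-contained and runnable.
    if "need decomposition" in prompt:
        return "yes" if " and " in prompt else "no"
    if "Decompose:" in prompt:
        return "What objects are on the table?\nWhat color is the cup?"
    return "a red cup and a book"

def answer_with_selective_decomposition(image, question):
    """Decide whether to decompose; if so, answer sub-questions first,
    then answer the original question conditioned on those answers."""
    decision = query_mllm(image, f"Does this question need decomposition? {question}")
    if decision.strip().lower().startswith("yes"):
        sub_questions = query_mllm(image, f"Decompose: {question}").splitlines()
        # Answer each sub-question and prepend the Q/A pairs as context.
        context = [f"Q: {sq} A: {query_mllm(image, sq)}" for sq in sub_questions]
        final_prompt = " ".join(context) + f" Now answer: {question}"
    else:
        final_prompt = question  # simple question: answer directly
    return query_mllm(image, final_prompt)

print(answer_with_selective_decomposition(None, "What objects and colors are on the table?"))
```

In the paper's terms, the decide-then-decompose branch corresponds to the selective decomposition policy that the proposed finetuning objective teaches the model to apply, rather than decomposing every question unconditionally.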

