Multi-Dimensional Insights: Benchmarking Real-World Personalization in Large Multimodal Models
December 17, 2024
Authors: YiFan Zhang, Shanglin Lei, Runqi Qiao, Zhuoma GongQue, Xiaoshuai Song, Guanting Dong, Qiuna Tan, Zhe Wei, Peiqing Yang, Ye Tian, Yadong Xue, Xiaofei Wang, Honggang Zhang
cs.AI
Abstract
The rapidly developing field of large multimodal models (LMMs) has led to the
emergence of diverse models with remarkable capabilities. However, existing
benchmarks fail to comprehensively, objectively and accurately evaluate whether
LMMs align with the diverse needs of humans in real-world scenarios. To bridge
this gap, we propose the Multi-Dimensional Insights (MDI) benchmark, which
includes over 500 images covering six common scenarios of human life. Notably,
the MDI-Benchmark offers two significant advantages over existing evaluations:
(1) Each image is accompanied by two types of questions: simple questions to
assess the model's understanding of the image, and complex questions to
evaluate the model's ability to analyze and reason beyond basic content. (2)
Recognizing that people of different age groups have varying needs and
perspectives when faced with the same scenario, our benchmark stratifies
questions into three age categories: young people, middle-aged people, and
older people. This design allows for a detailed assessment of LMMs'
capabilities in meeting the preferences and needs of different age groups. With
the MDI-Benchmark, even a strong model like GPT-4o achieves only 79% accuracy on age-related
tasks, indicating that existing LMMs still have considerable room for
improvement in addressing real-world applications. Looking ahead, we anticipate
that the MDI-Benchmark will open new pathways for aligning real-world
personalization in LMMs. The MDI-Benchmark data and evaluation code are
available at https://mdi-benchmark.github.io/.
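The abstract describes an evaluation protocol in which each question carries a complexity level (simple vs. complex) and an age stratum (young, middle-aged, older), and models are scored by per-group accuracy. As a rough illustration only, here is a minimal Python sketch of that protocol; all field names (`scenario`, `complexity`, `age_group`, etc.) are hypothetical placeholders, not the benchmark's actual schema, which is published at https://mdi-benchmark.github.io/.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class MDIQuestion:
    """Hypothetical record layout for one benchmark item."""
    image_id: str
    scenario: str    # one of the six everyday-life scenarios
    complexity: str  # "simple" (understanding) or "complex" (reasoning)
    age_group: str   # "young", "middle-aged", or "older"
    question: str
    answer: str

def accuracy_by_age_group(
    questions: Iterable[MDIQuestion],
    predict: Callable[[str, str], str],
) -> dict[str, float]:
    """Score a model (a callable mapping (image_id, question) -> answer)
    and report accuracy per age group, the kind of stratified metric
    behind the 79% GPT-4o figure cited in the abstract."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for q in questions:
        total[q.age_group] += 1
        if predict(q.image_id, q.question).strip() == q.answer.strip():
            correct[q.age_group] += 1
    return {group: correct[group] / total[group] for group in total}
```

The same loop keyed on `complexity` or `scenario` would yield the benchmark's other evaluation axes; exact-match scoring here is a simplification of whatever answer-matching the released evaluation code uses.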