Multi-Dimensional Insights: Benchmarking Real-World Personalization in Large Multimodal Models
December 17, 2024
Authors: YiFan Zhang, Shanglin Lei, Runqi Qiao, Zhuoma GongQue, Xiaoshuai Song, Guanting Dong, Qiuna Tan, Zhe Wei, Peiqing Yang, Ye Tian, Yadong Xue, Xiaofei Wang, Honggang Zhang
cs.AI
Abstract
The rapidly developing field of large multimodal models (LMMs) has led to the
emergence of diverse models with remarkable capabilities. However, existing
benchmarks fail to comprehensively, objectively and accurately evaluate whether
LMMs align with the diverse needs of humans in real-world scenarios. To bridge
this gap, we propose the Multi-Dimensional Insights (MDI) benchmark, which
includes over 500 images covering six common scenarios of human life. Notably,
the MDI-Benchmark offers two significant advantages over existing evaluations:
(1) Each image is accompanied by two types of questions: simple questions to
assess the model's understanding of the image, and complex questions to
evaluate the model's ability to analyze and reason beyond basic content. (2)
Recognizing that people of different age groups have varying needs and
perspectives when faced with the same scenario, our benchmark stratifies
questions into three age categories: young people, middle-aged people, and
older people. This design allows for a detailed assessment of LMMs'
capabilities in meeting the preferences and needs of different age groups. On the
MDI-Benchmark, even a strong model like GPT-4o achieves only 79% accuracy on age-related
tasks, indicating that existing LMMs still have considerable room for
improvement in addressing real-world applications. Looking ahead, we anticipate
that the MDI-Benchmark will open new pathways for aligning LMMs with real-world
personalization needs. The MDI-Benchmark data and evaluation code are
available at https://mdi-benchmark.github.io/
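
To make the benchmark's stratified design concrete, below is a minimal sketch of how accuracy could be broken down by age group and question complexity, the two axes the abstract describes. The record field names (`age_group`, `complexity`, `correct`) and the toy sample data are illustrative assumptions, not the released data format; the official schema is defined by the evaluation code at https://mdi-benchmark.github.io/.

```python
from collections import defaultdict

def stratified_accuracy(records):
    """Compute accuracy per (age_group, complexity) stratum.

    Each record is assumed to carry:
      age_group  - "young", "middle-aged", or "older"
      complexity - "simple" (image understanding) or "complex" (reasoning)
      correct    - whether the model's answer was judged correct
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in records:
        key = (r["age_group"], r["complexity"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {key: hits[key] / totals[key] for key in totals}

if __name__ == "__main__":
    # Toy records standing in for real per-question results (illustrative only).
    sample = [
        {"age_group": "young", "complexity": "simple", "correct": True},
        {"age_group": "young", "complexity": "complex", "correct": False},
        {"age_group": "older", "complexity": "simple", "correct": True},
        {"age_group": "older", "complexity": "complex", "correct": True},
    ]
    for (age, cx), acc in sorted(stratified_accuracy(sample).items()):
        print(f"{age:12s} {cx:8s} accuracy = {acc:.1%}")
```

Reporting results per stratum rather than as a single aggregate is what lets a benchmark like this surface gaps such as the 79% accuracy on age-related tasks cited above.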