

Personalized Multimodal Large Language Models: A Survey

December 3, 2024
作者: Junda Wu, Hanjia Lyu, Yu Xia, Zhehao Zhang, Joe Barrow, Ishita Kumar, Mehrnoosh Mirtaheri, Hongjie Chen, Ryan A. Rossi, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Jiuxiang Gu, Nesreen K. Ahmed, Yu Wang, Xiang Chen, Hanieh Deilamsalehy, Namyong Park, Sungchul Kim, Huanrui Yang, Subrata Mitra, Zhengmian Hu, Nedim Lipka, Dang Nguyen, Yue Zhao, Jiebo Luo, Julian McAuley
cs.AI

Abstract

Multimodal Large Language Models (MLLMs) have become increasingly important due to their state-of-the-art performance and ability to integrate multiple data modalities, such as text, images, and audio, to perform complex tasks with high accuracy. This paper presents a comprehensive survey on personalized multimodal large language models, focusing on their architecture, training methods, and applications. We propose an intuitive taxonomy for categorizing the techniques used to personalize MLLMs to individual users, and discuss the techniques accordingly. Furthermore, we discuss how such techniques can be combined or adapted when appropriate, highlighting their advantages and underlying rationale. We also provide a succinct summary of personalization tasks investigated in existing research, along with the evaluation metrics commonly used. Additionally, we summarize the datasets that are useful for benchmarking personalized MLLMs. Finally, we outline critical open challenges. This survey aims to serve as a valuable resource for researchers and practitioners seeking to understand and advance the development of personalized multimodal large language models.
