
Maya: An Instruction Finetuned Multilingual Multimodal Model

December 10, 2024
作者: Nahid Alam, Karthik Reddy Kanjula, Surya Guthikonda, Timothy Chung, Bala Krishna S Vegesna, Abhipsha Das, Anthony Susevski, Ryan Sze-Yin Chan, S M Iftekhar Uddin, Shayekh Bin Islam, Roshan Santhosh, Snegha A, Drishti Sharma, Chen Liu, Isha Chaturvedi, Genta Indra Winata, Ashvanth. S, Snehanshu Mukherjee, Alham Fikri Aji
cs.AI

Abstract

The rapid development of large Vision-Language Models (VLMs) has led to impressive results on academic benchmarks, primarily in widely spoken languages. However, significant gaps remain in the ability of current VLMs to handle low-resource languages and varied cultural contexts, largely due to a lack of high-quality, diverse, and safety-vetted data. Consequently, these models often struggle to understand low-resource languages and cultural nuances in a manner free from toxicity. To address these limitations, we introduce Maya, an open-source Multimodal Multilingual model. Our contributions are threefold: 1) a multilingual image-text pretraining dataset in eight languages, based on the LLaVA pretraining dataset; 2) a thorough analysis of toxicity within the LLaVA dataset, followed by the creation of a novel toxicity-free version across eight languages; and 3) a multilingual image-text model supporting these languages, enhancing cultural and linguistic comprehension in vision-language tasks. Code available at https://github.com/nahidalam/maya.
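The paper's second contribution is a toxicity-scrubbed version of the LLaVA pretraining data. The abstract does not describe the filtering pipeline itself, so the sketch below is only illustrative: it assumes LLaVA-style JSON records (an `image` field plus a list of `conversations` turns) and uses a hypothetical stand-in `toxicity_score` function that a real moderation classifier would replace.

```python
import json
from pathlib import Path

# Stand-in scorer: the paper does not specify its toxicity tooling, so this
# trivial keyword check is only a placeholder for a real classifier.
_FLAGGED_TERMS = {"slur_example", "violent_example"}

def toxicity_score(text: str) -> float:
    """Return a rough toxicity score in [0, 1] (placeholder logic)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in _FLAGGED_TERMS for w in words) / len(words)

def filter_pretraining_data(in_path: str, out_path: str, threshold: float = 0.01) -> None:
    """Keep only LLaVA-style records whose concatenated text falls below the threshold."""
    records = json.loads(Path(in_path).read_text(encoding="utf-8"))
    kept = []
    for rec in records:
        # Each record pairs an image with conversation turns; score the joined text.
        text = " ".join(turn.get("value", "") for turn in rec.get("conversations", []))
        if toxicity_score(text) < threshold:
            kept.append(rec)
    Path(out_path).write_text(
        json.dumps(kept, ensure_ascii=False, indent=2), encoding="utf-8"
    )

if __name__ == "__main__":
    # Hypothetical file names; substitute the actual LLaVA pretraining JSON.
    filter_pretraining_data("llava_pretrain.json", "llava_pretrain_clean.json")
```

The same filtered records would then be machine-translated into the eight target languages to build the multilingual pretraining set described above; see the repository at https://github.com/nahidalam/maya for the authors' actual pipeline.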

