Maya: An Instruction Finetuned Multilingual Multimodal Model
December 10, 2024
作者: Nahid Alam, Karthik Reddy Kanjula, Surya Guthikonda, Timothy Chung, Bala Krishna S Vegesna, Abhipsha Das, Anthony Susevski, Ryan Sze-Yin Chan, S M Iftekhar Uddin, Shayekh Bin Islam, Roshan Santhosh, Snegha A, Drishti Sharma, Chen Liu, Isha Chaturvedi, Genta Indra Winata, Ashvanth. S, Snehanshu Mukherjee, Alham Fikri Aji
cs.AI
Abstract
The rapid development of large Vision-Language Models (VLMs) has led to
impressive results on academic benchmarks, primarily in widely spoken
languages. However, significant gaps remain in the ability of current VLMs to
handle low-resource languages and varied cultural contexts, largely due to a
lack of high-quality, diverse, and safety-vetted data. Consequently, these
models often struggle to understand low-resource languages and cultural nuances
in a manner free from toxicity. To address these limitations, we introduce
Maya, an open-source Multimodal Multilingual model. Our contributions are
threefold: 1) a multilingual image-text pretraining dataset in eight languages,
based on the LLaVA pretraining dataset; 2) a thorough analysis of toxicity
within the LLaVA dataset, followed by the creation of a novel toxicity-free
version across eight languages; and 3) a multilingual image-text model
supporting these languages, enhancing cultural and linguistic comprehension in
vision-language tasks. Code available at https://github.com/nahidalam/maya.
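
The toxicity-vetting contribution implies a caption-level screening pass over the LLaVA pretraining pairs before they are translated into the eight languages. Below is a minimal Python sketch of such a filter, assuming a generic scoring function; score_toxicity is a hypothetical placeholder rather than an API from the Maya codebase, and the authors' actual pipeline may differ.

# Minimal sketch of caption-level toxicity filtering before multilingual pretraining.
# score_toxicity is a hypothetical stand-in for an off-the-shelf toxicity classifier;
# this is NOT the paper's actual filtering pipeline.
from typing import Callable

def filter_pretraining_pairs(
    pairs: list[dict],                       # each dict: {"image": str, "caption": str}
    score_toxicity: Callable[[str], float],  # hypothetical scorer returning a value in [0, 1]
    threshold: float = 0.5,
) -> list[dict]:
    """Keep only image-caption pairs whose caption scores below the toxicity threshold."""
    return [pair for pair in pairs if score_toxicity(pair["caption"]) < threshold]

if __name__ == "__main__":
    # Toy scorer for demonstration only: flags captions containing a blocklisted word.
    toy_scorer = lambda text: 1.0 if "badword" in text.lower() else 0.0
    sample = [
        {"image": "000001.jpg", "caption": "A dog playing in the park."},
        {"image": "000002.jpg", "caption": "A badword example caption."},
    ]
    print(filter_pretraining_pairs(sample, toy_scorer))  # keeps only the first pair

In a full pipeline, the retained captions would then be translated into each of the eight target languages and re-paired with their images for pretraining; image-level screening could be added analogously.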