MIT-10M: A Large Scale Parallel Corpus of Multilingual Image Translation

December 10, 2024
Authors: Bo Li, Shaolin Zhu, Lijie Wen
cs.AI

Abstract

Image Translation (IT) holds immense potential across diverse domains, enabling the translation of textual content within images into various languages. However, existing datasets often suffer from limitations in scale, diversity, and quality, hindering the development and evaluation of IT models. To address this issue, we introduce MIT-10M, a large-scale parallel corpus for multilingual image translation with over 10M image-text pairs derived from real-world data, which has undergone extensive data cleaning and multilingual translation validation. It contains 840K images in three sizes, 28 task categories, three difficulty levels, and image-text pairs in 14 languages, a considerable improvement over existing datasets. We conduct extensive experiments to evaluate and train models on MIT-10M. The experimental results clearly indicate that our dataset is better suited to evaluating how models handle challenging and complex image translation tasks in the real world. Moreover, the performance of the model fine-tuned on MIT-10M triples that of the baseline model, further confirming its superiority.
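
The abstract describes each corpus entry as an image paired with its source text and translations across 14 languages, annotated with one of 28 task categories and one of three difficulty levels. Below is a minimal sketch of how such records might be represented and filtered into (image, source, target) training triples; the field names (`translations`, `category`, `difficulty`, etc.) are illustrative assumptions, not the paper's released schema.

```python
from dataclasses import dataclass, field

# Hypothetical layout for one MIT-10M entry (field names are assumptions, not
# the released schema): an image path, the text appearing in the image, and
# its translations keyed by language code.
@dataclass
class ITExample:
    image_path: str
    source_lang: str
    source_text: str
    translations: dict[str, str] = field(default_factory=dict)  # e.g. {"de": "...", "zh": "..."}
    category: str = ""        # one of the 28 task categories
    difficulty: str = "easy"  # one of the three difficulty levels

def filter_pairs(examples: list[ITExample], tgt_lang: str, difficulty: str) -> list[tuple[str, str, str]]:
    """Collect (image_path, source_text, target_text) triples for a single
    target language and difficulty level."""
    return [
        (ex.image_path, ex.source_text, ex.translations[tgt_lang])
        for ex in examples
        if ex.difficulty == difficulty and tgt_lang in ex.translations
    ]
```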
