

SIFT-50M: A Large-Scale Multilingual Dataset for Speech Instruction Fine-Tuning

April 12, 2025
Authors: Prabhat Pandey, Rupak Vignesh Swaminathan, K V Vijay Girish, Arunasish Sen, Jian Xie, Grant P. Strimel, Andreas Schwarz
cs.AI

Abstract

We introduce SIFT (Speech Instruction Fine-Tuning), a 50M-example dataset designed for instruction fine-tuning and pre-training of speech-text large language models (LLMs). SIFT-50M is built from publicly available speech corpora, which collectively contain 14K hours of speech, and leverages LLMs along with off-the-shelf expert models. The dataset spans five languages, encompassing a diverse range of speech understanding as well as controllable speech generation instructions. Using SIFT-50M, we train SIFT-LLM, which outperforms existing speech-text LLMs on instruction-following benchmarks while achieving competitive performance on foundational speech tasks. To support further research, we also introduce EvalSIFT, a benchmark dataset specifically designed to evaluate the instruction-following capabilities of speech-text LLMs.

