
Scaling Transformers for Low-Bitrate High-Quality Speech Coding

November 29, 2024
作者: Julian D Parker, Anton Smirnov, Jordi Pons, CJ Carr, Zack Zukowski, Zach Evans, Xubo Liu
cs.AI

Abstract

The tokenization of speech with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work we show that by scaling a transformer architecture with large parameter count to this problem, and applying a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bit-rates of 400 or 700 bits-per-second. The trained models strongly out-perform existing baselines in both objective and subjective tests.
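The abstract's bottleneck is built on Finite Scalar Quantization (FSQ), where each latent dimension is bounded and rounded to a small fixed set of levels, so the codebook is implicit rather than learned. Below is a minimal NumPy sketch of that core rule; the function names, the tanh bound, and the example level counts are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite Scalar Quantization (sketch): bound each latent dimension
    to `levels[i]` discrete values and round to the nearest one.
    The tanh bound follows the standard FSQ formulation; the paper's
    exact projection and level counts may differ."""
    z = np.asarray(z, dtype=np.float64)
    L = np.asarray(levels, dtype=np.float64)
    # Squash each dimension into (-1, 1), then scale onto the level grid.
    bounded = np.tanh(z) * (L - 1) / 2
    quantized = np.round(bounded)
    # Map back to (-1, 1) so the decoder sees a fixed-range input.
    return quantized / ((L - 1) / 2)

def codebook_size(levels):
    # The implicit codebook size is the product of per-dimension levels.
    return int(np.prod(levels))
```

For intuition on the bitrates quoted above: four dimensions with 8 levels each give `codebook_size([8, 8, 8, 8]) == 4096`, i.e. 12 bits per frame, so at an assumed 50 frames per second the stream would cost 600 bits per second (purely illustrative numbers, not the paper's reported setup).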

