Scaling Transformers for Low-Bitrate High-Quality Speech Coding

November 29, 2024
Authors: Julian D Parker, Anton Smirnov, Jordi Pons, CJ Carr, Zack Zukowski, Zach Evans, Xubo Liu
cs.AI

Abstract

The tokenization of speech with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally, such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work, we show that by scaling a Transformer architecture with a large parameter count to this problem, and applying a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bitrates of 400 or 700 bits per second. The trained models strongly outperform existing baselines in both objective and subjective tests.
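
To make the bottleneck concrete, below is a minimal sketch of finite scalar quantization in PyTorch, following the generic FSQ formulation (Mentzer et al., 2023) rather than this paper's exact implementation; the `fsq` function name, the odd per-dimension level counts, and the tensor shapes are illustrative assumptions.

```python
# Minimal sketch of a Finite Scalar Quantization (FSQ) bottleneck.
# This follows the generic FSQ recipe, not this paper's exact code;
# odd per-dimension level counts are assumed below (even counts need
# an extra half-step offset, omitted here for brevity).
import torch

def fsq(z: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    """Quantize each latent dimension to a small fixed set of values.

    z      : (..., d) real-valued latents from the encoder
    levels : (d,) odd number of quantization levels per dimension
    """
    half = (levels - 1) / 2.0              # e.g. 5 levels -> integers -2..2
    bounded = torch.tanh(z) * half         # squash each dim into (-half, half)
    quantized = torch.round(bounded)       # snap to the nearest integer level
    # Straight-through estimator: rounded values on the forward pass,
    # identity gradient on the backward pass, so the encoder stays trainable.
    quantized = bounded + (quantized - bounded).detach()
    return quantized / half                # renormalize to [-1, 1] for the decoder

# Hypothetical configuration: 6 latent dimensions, 5 levels each.
levels = torch.full((6,), 5.0)
z = torch.randn(2, 6, requires_grad=True)  # (batch, latent_dim)
z_q = fsq(z, levels)
```

The bitrate of such a bottleneck is the latent frame rate times the bits per frame, i.e. the sum of log2(levels) over dimensions. In the hypothetical configuration above, each frame carries 6 · log2(5) ≈ 13.9 bits, so a 50 Hz frame rate would land near 700 bits per second; the level counts and frame rates used in the paper may differ.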

