
LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation

February 27, 2025
作者: Keisuke Kamahori, Jungo Kasai, Noriyuki Kojima, Baris Kasikci
cs.AI

Abstract

Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in the reduced dimension. Evaluation results show that our method can compress Whisper large-v3's encoder size by over 50%, matching Whisper medium's size with better transcription accuracy, thereby establishing a new Pareto-optimal frontier of efficiency and performance. The code of LiteASR is available at https://github.com/efeslab/LiteASR.
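The core idea in the abstract, approximating a linear layer by a chain of low-rank matrix multiplications derived from PCA on calibration activations, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: all shapes, the synthetic calibration data, and the rank choice are hypothetical, and mean-centering is skipped for brevity (the principal subspace is taken directly from an SVD of the calibration outputs).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_calib, k = 256, 256, 512, 32  # hypothetical layer sizes and rank

# A dense linear layer y = x @ W.T that we want to compress.
W = rng.normal(size=(d_out, d_in))

# Synthetic calibration activations with low intrinsic dimension (rank 16),
# mimicking the strong low-rank structure observed in encoder activations.
X = rng.normal(size=(n_calib, 16)) @ rng.normal(size=(16, d_in))
Y = X @ W.T                                   # layer outputs on calibration data

# PCA via SVD: the top-k right singular vectors span the principal
# subspace of the layer's outputs.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
Vk = Vt[:k].T                                 # (d_out, k) principal directions

# Factor the layer: W ≈ Vk @ (Vk.T @ W), i.e. one thin projection
# followed by one thin expansion instead of a single dense matmul.
A = Vk.T @ W                                  # (k, d_in)

def approx_layer(x):
    # Chain of two low-rank multiplications replacing x @ W.T.
    return (x @ A.T) @ Vk.T

# Relative approximation error on the calibration set.
err = np.linalg.norm(Y - approx_layer(X)) / np.linalg.norm(Y)
```

With rank k = 32, the factored layer stores k * (d_in + d_out) = 16,384 parameters instead of d_in * d_out = 65,536, a 4x reduction, while reproducing the calibration outputs almost exactly because they lie in a low-dimensional subspace. In the actual method, the self-attention projections are additionally kept in the reduced k-dimensional space rather than expanded back.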

