Quantization for OpenAI's Whisper Models: A Comparative Analysis

March 12, 2025
Author: Allison Andreyev
cs.AI

Abstract

Automated speech recognition (ASR) models have gained prominence for applications such as captioning, speech translation, and live transcription. This paper studies Whisper and two model variants: one optimized for live speech streaming and another for offline transcription. Notably, these models have been found to generate hallucinated content, reducing transcription reliability. Furthermore, larger model variants exhibit increased latency and pose challenges for deployment on resource-constrained devices. This study analyzes the similarities and differences between the three Whisper models, qualitatively examining their distinct capabilities. It then quantifies the impact of model quantization on latency and evaluates its viability for edge deployment. Using the open-source LibriSpeech dataset, this paper evaluates the word error rate (WER) and latency of whispercpp under three quantization methods (INT4, INT5, INT8). Results show that quantization reduces latency by 19% and model size by 45% while preserving transcription accuracy. These findings provide insight into the optimal use cases for the different Whisper models and the feasibility of edge-device deployment. All code, datasets, and implementation details are available in a public GitHub repository: https://github.com/allisonandreyev/WhisperQuantization.git
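
To make the evaluation pipeline concrete, below is a minimal sketch of how WER and per-file latency could be measured for a quantized whispercpp model, in the spirit of the setup the abstract describes. This is not the paper's released harness (see the linked GitHub repository for that): the binary path, the q5_0 model filename, and the dataset pair are placeholder assumptions, and the jiwer library is used here as one common WER implementation.

```python
import subprocess
import time

import jiwer  # common open-source WER implementation; assumed installed

# Hypothetical paths: whisper.cpp's CLI binary and a quantized ggml model
# (e.g., one produced by whisper.cpp's `quantize` tool).
WHISPER_BIN = "./main"
MODEL_PATH = "models/ggml-base.en-q5_0.bin"


def transcribe(wav_path: str) -> tuple[str, float]:
    """Transcribe one 16 kHz WAV with whisper.cpp; return (text, seconds)."""
    start = time.perf_counter()
    result = subprocess.run(
        [WHISPER_BIN, "-m", MODEL_PATH, "-f", wav_path, "--no-timestamps"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip(), time.perf_counter() - start


# Placeholder (audio, reference) pairs standing in for LibriSpeech utterances.
dataset = [
    ("sample.wav", "reference transcript for the sample goes here"),
]

refs, hyps, latencies = [], [], []
for wav, ref in dataset:
    hyp, seconds = transcribe(wav)
    refs.append(ref)
    hyps.append(hyp)
    latencies.append(seconds)

print(f"WER: {jiwer.wer(refs, hyps):.3f}")
print(f"mean latency: {sum(latencies) / len(latencies):.2f} s")
```

In practice, reference and hypothesis strings would also be normalized (lowercasing, punctuation stripping) before scoring, since raw WER is sensitive to such surface differences; repeating the loop with INT4-, INT5-, and INT8-quantized model files would yield the latency and accuracy comparison reported above.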
