Towards Diverse and Efficient Audio Captioning via Diffusion Models
September 14, 2024
Authors: Manjie Xu, Chenxing Li, Xinyi Tu, Yong Ren, Ruibo Fu, Wei Liang, Dong Yu
cs.AI
Abstract
We introduce Diffusion-based Audio Captioning (DAC), a non-autoregressive
diffusion model tailored for diverse and efficient audio captioning. Although
existing captioning models relying on language backbones have achieved
remarkable success in various captioning tasks, their insufficient performance
in terms of generation speed and diversity impedes progress in audio
understanding and multimedia applications. Our diffusion-based framework offers
unique advantages stemming from its inherent stochasticity and holistic context
modeling in captioning. Through rigorous evaluation, we demonstrate that DAC
not only achieves SOTA caption quality compared with existing baselines, but
also significantly outperforms them in terms of
generation speed and diversity. The success of DAC illustrates that text
generation can also be seamlessly integrated with audio and visual generation
tasks using a diffusion backbone, paving the way for a unified, audio-related
generative model across different modalities.