FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait
December 2, 2024
Authors: Taekyung Ki, Dongchan Min, Gyoungsu Chae
cs.AI
Abstract
With the rapid advancement of diffusion-based generative models, portrait
image animation has achieved remarkable results. However, it still faces
challenges in temporally consistent video generation and fast sampling due to
its iterative sampling nature. This paper presents FLOAT, an audio-driven
talking portrait video generation method based on a flow matching generative
model. We shift the generative modeling from the pixel-based latent space to a
learned motion latent space, enabling efficient design of temporally consistent
motion. To achieve this, we introduce a transformer-based vector field
predictor with a simple yet effective frame-wise conditioning mechanism.
Additionally, our method supports speech-driven emotion enhancement, enabling a
natural incorporation of expressive motions. Extensive experiments demonstrate
that our method outperforms state-of-the-art audio-driven talking portrait
methods in terms of visual quality, motion fidelity, and efficiency.
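To make the core idea concrete, below is a minimal sketch (not the authors' code) of how conditional flow matching could be set up in a learned motion latent space with frame-wise conditioning. A vector field v_theta is trained to match the target velocity x1 - x0 along the linear path x_t = (1 - t) x0 + t x1, where x0 is noise and x1 is a ground-truth motion latent sequence. All names and dimensions (VectorFieldTransformer, d_motion, cond_dim) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of flow matching over motion latents with
# frame-wise conditioning; names and sizes are assumptions.
import torch
import torch.nn as nn

class VectorFieldTransformer(nn.Module):
    """Predicts the velocity field v_theta(x_t, t, cond) over a motion sequence."""
    def __init__(self, d_motion=512, cond_dim=512, n_layers=8, n_heads=8):
        super().__init__()
        self.in_proj = nn.Linear(d_motion + cond_dim + 1, d_motion)
        layer = nn.TransformerEncoderLayer(d_motion, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_motion, d_motion)

    def forward(self, x_t, t, cond):
        # x_t:  (B, T, d_motion) noisy motion latents at flow time t
        # t:    (B,) scalar flow time, broadcast to every frame
        # cond: (B, T, cond_dim) frame-wise conditioning features
        #       (e.g., per-frame audio and emotion embeddings)
        t_tok = t[:, None, None].expand(-1, x_t.size(1), 1)
        h = self.in_proj(torch.cat([x_t, cond, t_tok], dim=-1))
        return self.out_proj(self.backbone(h))

def flow_matching_loss(model, x1, cond):
    """Conditional flow matching with the linear path
    x_t = (1 - t) * x0 + t * x1, whose target velocity is x1 - x0."""
    x0 = torch.randn_like(x1)                     # noise endpoint (t = 0)
    t = torch.rand(x1.size(0), device=x1.device)  # flow time ~ U(0, 1)
    xt = (1 - t)[:, None, None] * x0 + t[:, None, None] * x1
    target = x1 - x0
    return ((model(xt, t, cond) - target) ** 2).mean()

@torch.no_grad()
def sample(model, cond, d_motion=512, steps=10):
    """Euler integration of dx/dt = v_theta from noise (t = 0) to a
    motion latent sequence (t = 1)."""
    x = torch.randn(cond.size(0), cond.size(1), d_motion, device=cond.device)
    for i in range(steps):
        t = torch.full((x.size(0),), i / steps, device=x.device)
        x = x + model(x, t, cond) / steps
    return x
```

Because sampling only integrates the learned ODE for a handful of Euler steps, and the motion latent sequence is far lower-dimensional than pixel-space video latents, this setup suggests why flow matching in a motion latent space can avoid the slow iterative sampling of pixel-space diffusion, which is the efficiency claim made in the abstract.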