

EMO2: End-Effector Guided Audio-Driven Avatar Video Generation

January 18, 2025
Authors: Linrui Tian, Siqi Hu, Qi Wang, Bang Zhang, Liefeng Bo
cs.AI

Abstract

In this paper, we propose a novel audio-driven talking head method capable of simultaneously generating highly expressive facial expressions and hand gestures. Unlike existing methods that focus on generating full-body or half-body poses, we investigate the challenges of co-speech gesture generation and identify the weak correspondence between audio features and full-body gestures as a key limitation. To address this, we redefine the task as a two-stage process. In the first stage, we generate hand poses directly from audio input, leveraging the strong correlation between audio signals and hand movements. In the second stage, we employ a diffusion model to synthesize video frames, incorporating the hand poses generated in the first stage to produce realistic facial expressions and body movements. Our experimental results demonstrate that the proposed method outperforms state-of-the-art approaches, such as CyberHost and Vlogger, in terms of both visual quality and synchronization accuracy. This work provides a new perspective on audio-driven gesture generation and a robust framework for creating expressive and natural talking head animations.
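To make the two-stage design concrete, below is a minimal, illustrative sketch of the data flow: a stage-1 model regresses a hand-pose sequence directly from audio features, and a stage-2 pose-conditioned diffusion denoiser predicts noise for video frames given those poses. The module names, feature dimensions, and architectures here are assumptions for illustration only and are not the authors' EMO2 implementation.

```python
# Hedged sketch of the two-stage pipeline described in the abstract.
# All names and shapes are illustrative assumptions, not the paper's code:
# stage 1 maps audio features to hand poses; stage 2 is one pose-conditioned
# denoising step of a video diffusion model.

import torch
import torch.nn as nn


class AudioToHandPose(nn.Module):
    """Stage 1 (assumed): regress a hand-pose sequence from audio features."""

    def __init__(self, audio_dim=128, hidden=256, pose_dim=42):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)  # e.g. 2 hands x 21 keypoints

    def forward(self, audio_feats):               # (B, T, audio_dim)
        h, _ = self.encoder(audio_feats)
        return self.head(h)                       # (B, T, pose_dim)


class PoseConditionedDenoiser(nn.Module):
    """Stage 2 (assumed): denoise video frames conditioned on stage-1 poses."""

    def __init__(self, frame_dim=64 * 64 * 3, pose_dim=42, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + pose_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, frame_dim),
        )

    def forward(self, noisy_frames, poses, t):
        # noisy_frames: (B, T, frame_dim), poses: (B, T, pose_dim), t: (B, T, 1)
        x = torch.cat([noisy_frames, poses, t], dim=-1)
        return self.net(x)                        # predicted noise per frame


if __name__ == "__main__":
    B, T = 1, 16
    audio_feats = torch.randn(B, T, 128)          # placeholder audio features
    stage1 = AudioToHandPose()
    stage2 = PoseConditionedDenoiser()

    hand_poses = stage1(audio_feats)              # stage 1: audio -> hand poses
    noisy = torch.randn(B, T, 64 * 64 * 3)        # frames at some noise level
    t = torch.full((B, T, 1), 0.5)                # normalized diffusion timestep
    pred_noise = stage2(noisy, hand_poses, t)     # stage 2: pose-guided denoising
    print(pred_noise.shape)                       # torch.Size([1, 16, 12288])
```

In the actual method, stage 2 would iterate this denoising over many timesteps and condition on additional inputs (e.g., a reference image of the speaker); the sketch only shows how the stage-1 hand poses enter the stage-2 model as a conditioning signal.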
