
One Shot, One Talk: Whole-body Talking Avatar from a Single Image

December 2, 2024
Authors: Jun Xiang, Yudong Guo, Leipeng Hu, Boyang Guo, Yancheng Yuan, Juyong Zhang
cs.AI

Abstract

Building realistic and animatable avatars still requires minutes of multi-view or monocular self-rotating videos, and most methods lack precise control over gestures and expressions. To push this boundary, we address the challenge of constructing a whole-body talking avatar from a single image. We propose a novel pipeline that tackles two critical issues: 1) complex dynamic modeling and 2) generalization to novel gestures and expressions. To achieve seamless generalization, we leverage recent pose-guided image-to-video diffusion models to generate imperfect video frames as pseudo-labels. To overcome the dynamic modeling challenge posed by inconsistent and noisy pseudo-videos, we introduce a tightly coupled 3DGS-mesh hybrid avatar representation and apply several key regularizations to mitigate inconsistencies caused by imperfect labels. Extensive experiments on diverse subjects demonstrate that our method enables the creation of a photorealistic, precisely animatable, and expressive whole-body talking avatar from just a single image.
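To make the pipeline described above concrete, the sketch below illustrates the overall training pattern the abstract outlines: a pose-guided image-to-video diffusion model produces imperfect pseudo-label frames from the single reference image, and a differentiable 3DGS-mesh hybrid avatar is then optimized against them with a regularizer that keeps Gaussians anchored to the mesh. This is a minimal illustrative sketch, not the authors' implementation; all names here (`avatar.render`, `avatar.gaussian_mesh_offset`, the `diffusion_model` call, and the loss weights) are hypothetical placeholders.

```python
# Illustrative sketch of the training loop suggested by the abstract.
# All module/method names are hypothetical placeholders, not the paper's API.
import torch

def train_avatar(avatar, diffusion_model, ref_image, driving_poses,
                 steps=2000, lam_photo=1.0, lam_reg=0.1):
    """avatar: differentiable 3DGS-mesh hybrid renderer (placeholder).
    diffusion_model: pose-guided image-to-video generator (placeholder)."""
    # 1) Generate imperfect pseudo-label frames once, offline, from the
    #    single reference image and a set of driving poses/expressions.
    with torch.no_grad():
        pseudo_frames = diffusion_model(ref_image, driving_poses)  # (T, 3, H, W)

    opt = torch.optim.Adam(avatar.parameters(), lr=1e-3)
    for _ in range(steps):
        t = torch.randint(len(driving_poses), (1,)).item()
        rendered = avatar.render(driving_poses[t])  # (3, H, W)

        # Photometric loss against the noisy pseudo-label frame.
        loss_photo = torch.nn.functional.l1_loss(rendered, pseudo_frames[t])

        # Regularizer tying Gaussians to the mesh surface, so inconsistent
        # pseudo-frames cannot pull them into free space.
        loss_reg = avatar.gaussian_mesh_offset().square().mean()

        loss = lam_photo * loss_photo + lam_reg * loss_reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return avatar
```

The key design choice the abstract highlights is treating the diffusion outputs as weak supervision: because the pseudo-videos are inconsistent across frames, the mesh-coupled representation and regularization terms, rather than the labels themselves, are what enforce a coherent animatable geometry.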
