MagicFace: High-Fidelity Facial Expression Editing with Action-Unit Control
January 4, 2025
Authors: Mengting Wei, Tuomas Varanka, Xingxun Jiang, Huai-Qian Khor, Guoying Zhao
cs.AI
Abstract
We address the problem of facial expression editing by controlling the
relative variation of facial action units (AUs) of the same person. This
enables us to edit this specific person's expression in a fine-grained,
continuous and interpretable manner, while preserving their identity, pose,
background and detailed facial attributes. Key to our model, which we dub
MagicFace, is a diffusion model conditioned on AU variations and an ID encoder
to preserve facial details of high consistency. Specifically, to preserve the
facial details with the input identity, we leverage the power of pretrained
Stable-Diffusion models and design an ID encoder to merge appearance features
through self-attention. To keep background and pose consistency, we introduce
an efficient Attribute Controller that explicitly informs the model of the
current background and pose of the target. By injecting AU variations into a denoising
UNet, our model can animate arbitrary identities with various AU combinations,
yielding superior results in high-fidelity expression editing compared to other
facial expression editing works. Code is publicly available at
https://github.com/weimengting/MagicFace.
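The abstract describes injecting AU variations as conditioning into the denoising UNet. Purely as an illustration of one common conditioning pattern (the paper's actual architecture, AU count, and embedding dimensions are not specified here and are assumed below), the sketch embeds a vector of relative AU intensity changes with a tiny MLP and adds it to the diffusion timestep embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 24 AUs, 128-d conditioning.
NUM_AUS, COND_DIM = 24, 128

# Toy two-layer MLP weights for embedding the AU-variation vector
# (e.g. +0.6 on AU12 "lip corner puller") into the conditioning space.
W1 = rng.standard_normal((NUM_AUS, COND_DIM)) * 0.02
W2 = rng.standard_normal((COND_DIM, COND_DIM)) * 0.02

def embed_au_delta(au_delta):
    """Map a vector of relative AU changes to a conditioning embedding."""
    h = np.maximum(au_delta @ W1, 0.0)  # ReLU
    return h @ W2

def conditioned_timestep_embedding(t_emb, au_delta):
    """Add the AU embedding to the timestep embedding, one standard way
    of injecting extra conditioning into a denoising UNet."""
    return t_emb + embed_au_delta(au_delta)

# Example edit: raise AU12 (smile) intensity, lower AU4 (brow lowerer).
au_delta = np.zeros(NUM_AUS)
au_delta[11], au_delta[3] = 0.6, -0.4

t_emb = rng.standard_normal(COND_DIM)
cond = conditioned_timestep_embedding(t_emb, au_delta)
print(cond.shape)  # (128,)
```

Note that with a zero AU-variation vector the conditioning reduces to the plain timestep embedding, which matches the paper's framing of editing as a *relative* change from the input expression.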