
Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance

February 10, 2025
Authors: Li Hu, Guangyuan Wang, Zhen Shen, Xin Gao, Dechao Meng, Lian Zhuo, Peng Zhang, Bang Zhang, Liefeng Bo
cs.AI

Abstract

Recent character image animation methods based on diffusion models, such as Animate Anyone, have made significant progress in generating consistent and generalizable character animations. However, these approaches fail to produce reasonable associations between characters and their environments. To address this limitation, we introduce Animate Anyone 2, aiming to animate characters with environment affordance. Beyond extracting motion signals from the source video, we additionally capture environmental representations as conditional inputs. The environment is formulated as the region excluding the characters, and our model generates characters to populate these regions while maintaining coherence with the environmental context. We propose a shape-agnostic mask strategy that more effectively characterizes the relationship between character and environment. Furthermore, to enhance the fidelity of object interactions, we leverage an object guider to extract features of interacting objects and employ spatial blending for feature injection. We also introduce a pose modulation strategy that enables the model to handle more diverse motion patterns. Experimental results demonstrate the superior performance of the proposed method.
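
For intuition, the sketch below shows one way the environment conditioning and the shape-agnostic mask described in the abstract might be constructed: the character mask is coarsened so it no longer leaks the exact target silhouette, and the environment input is the frame with that coarsened region removed. The abstract does not specify an implementation, so the function names, the OpenCV dilate/blur pipeline, and all parameter values here are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch (not from the paper): coarsen a character mask so it is
# "shape-agnostic", then form the environment region as everything outside it.
import numpy as np
import cv2


def shape_agnostic_mask(char_mask: np.ndarray,
                        dilate_px: int = 25,
                        blur_sigma: float = 15.0,
                        threshold: float = 0.3) -> np.ndarray:
    """Coarsen a binary (H, W) character mask in {0, 1} so it covers the
    character region without revealing its precise outline."""
    mask = char_mask.astype(np.float32)
    # Dilate to grow the region well beyond the character boundary.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (dilate_px, dilate_px))
    mask = cv2.dilate(mask, kernel)
    # Blur and re-threshold to round off remaining silhouette detail.
    mask = cv2.GaussianBlur(mask, ksize=(0, 0), sigmaX=blur_sigma)
    return (mask > threshold).astype(np.float32)


def environment_region(frame: np.ndarray, char_mask: np.ndarray) -> np.ndarray:
    """Environment conditioning input: the frame with the coarsened character
    region masked out, following the abstract's formulation."""
    coarse = shape_agnostic_mask(char_mask)
    return frame * (1.0 - coarse[..., None])
```

The point of the coarsening step is that a mask tracing the character exactly would already encode the pose and body shape the model is supposed to synthesize; a dilated, blurred region only tells the model roughly where to place the character within the scene.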
