Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters

November 27, 2024
Authors: Zhiyang Guo, Jinxu Xiang, Kai Ma, Wengang Zhou, Houqiang Li, Ran Zhang
cs.AI

Abstract

3D characters are essential to modern creative industries, but making them animatable often demands extensive manual work in tasks like rigging and skinning. Existing automatic rigging tools face several limitations, including the necessity for manual annotations, rigid skeleton topologies, and limited generalization across diverse shapes and poses. An alternative approach is to generate animatable avatars pre-bound to a rigged template mesh. However, this method often lacks flexibility and is typically limited to realistic human shapes. To address these issues, we present Make-It-Animatable, a novel data-driven method to make any 3D humanoid model ready for character animation in less than one second, regardless of its shape and pose. Our unified framework generates high-quality blend weights, bones, and pose transformations. By incorporating a particle-based shape autoencoder, our approach supports various 3D representations, including meshes and 3D Gaussian splats. Additionally, we employ a coarse-to-fine representation and a structure-aware modeling strategy to ensure both accuracy and robustness, even for characters with non-standard skeleton structures. We conducted extensive experiments to validate our framework's effectiveness. Compared to existing methods, our approach demonstrates significant improvements in both quality and speed.
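The blend weights, bones, and pose transformations the framework predicts are the standard inputs to linear blend skinning, which is how a rigged character is actually deformed at animation time. The following is a minimal NumPy sketch of that downstream step, not the paper's implementation; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """Deform rest-pose vertices with per-bone transforms (illustrative sketch).

    vertices:   (V, 3) rest-pose vertex positions
    weights:    (V, B) blend weights; each row sums to 1
    transforms: (B, 4, 4) homogeneous per-bone pose transformations
    """
    V = vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)
    # Each bone's transform applied to every vertex: (B, V, 4)
    per_bone = np.einsum("bij,vj->bvi", transforms, homo)
    # Blend the per-bone results with the skinning weights: (V, 4)
    blended = np.einsum("vb,bvi->vi", weights, per_bone)
    return blended[:, :3]
```

A vertex weighted entirely to one bone simply follows that bone's transform; vertices with mixed weights interpolate between bone motions, which is what makes the predicted weight quality visible in the final animation.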
