
StrandHead: Text to Strand-Disentangled 3D Head Avatars Using Hair Geometric Priors

December 16, 2024
Authors: Xiaokun Sun, Zeyu Cai, Zhenyu Zhang, Ying Tai, Jian Yang
cs.AI

Abstract

While a haircut signals distinct personality, existing avatar generation methods fail to model practical hair because they rely on holistic or entangled representations. We propose StrandHead, a novel text-to-3D head avatar generation method capable of generating disentangled 3D hair with a strand-based representation. Without using any 3D data for supervision, we demonstrate that realistic hair strands can be generated from text prompts by distilling 2D generative diffusion models. To this end, we propose a series of reliable priors on shape initialization, geometric primitives, and statistical haircut features, leading to stable optimization and text-aligned results. Extensive experiments show that StrandHead achieves state-of-the-art realism and diversity among generated 3D head avatars and hairstyles. The generated 3D hair can also be easily imported into Unreal Engine for physical simulation and other applications. The code will be available at https://xiaokunsun.github.io/StrandHead.github.io.
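The strand representation the abstract refers to typically treats each hair strand as an ordered polyline of 3D points grown from a scalp root. A minimal, hypothetical sketch of that idea follows; the function name, parameters, and the sinusoidal "curl" offset are illustrative assumptions, not the paper's actual API:

```python
import math

def make_strand(root, direction, n_points=16, step=0.01, curl=0.0):
    """Grow one hair strand as a polyline of 3D points from a scalp root.

    Hypothetical sketch of a strand-based representation: points are
    marched along `direction`, with an optional sinusoidal offset
    standing in for curliness statistics. Not the paper's actual code.
    """
    # Normalize the growth direction.
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]
    # A vector perpendicular to d for the curl offset; fall back if degenerate.
    perp = [d[1], -d[0], 0.0]
    pnorm = math.sqrt(sum(c * c for c in perp))
    perp = [c / pnorm for c in perp] if pnorm > 1e-8 else [1.0, 0.0, 0.0]
    points = []
    for i in range(n_points):
        off = curl * math.sin(0.8 * i)  # zero at i = 0, so the strand starts at the root
        points.append(tuple(root[k] + i * step * d[k] + off * perp[k]
                            for k in range(3)))
    return points

strand = make_strand([0.0, 0.0, 1.7], [0.0, 1.0, 0.1], curl=0.002)
print(len(strand), strand[0])  # 16 points; first point equals the root
```

A full hairstyle is then just a collection of such polylines rooted on the scalp mesh, which is also the form strand data takes when exported for simulation in engines such as Unreal.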
