PERSE: Personalized 3D Generative Avatars from A Single Portrait
December 30, 2024
Authors: Hyunsoo Cha, Inhee Lee, Hanbyul Joo
cs.AI
Abstract
We present PERSE, a method for building an animatable personalized generative
avatar from a reference portrait. Our avatar model enables facial attribute
editing in a continuous and disentangled latent space to control each facial
attribute, while preserving the individual's identity. To achieve this, our
method begins by synthesizing large-scale synthetic 2D video datasets, where
each video contains consistent changes in the facial expression and viewpoint,
combined with a variation in a specific facial attribute from the original
input. We propose a novel pipeline to produce high-quality, photorealistic 2D
videos with facial attribute editing. Leveraging this synthetic attribute
dataset, we present a personalized avatar creation method based on 3D
Gaussian Splatting, learning a continuous and disentangled latent space for
intuitive facial attribute manipulation. To enforce smooth transitions in this
latent space, we introduce a latent space regularization technique by using
interpolated 2D faces as supervision. Compared to previous approaches, we
demonstrate that PERSE generates high-quality avatars with interpolated
attributes while preserving the identity of the reference person.
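As a rough illustration of the latent-space regularization idea mentioned in the abstract, the sketch below supervises a render produced from an interpolated attribute latent with a correspondingly interpolated 2D face. This is a minimal PyTorch-style sketch under stated assumptions: `render_avatar`, the L1 loss form, and the pixel blend used as the interpolation target are all hypothetical stand-ins, not the authors' actual API; in the paper, interpolated 2D faces come from its own 2D synthesis pipeline.

```python
# Hypothetical sketch of latent-space regularization with interpolated 2D
# faces as supervision. Function and argument names are assumptions made for
# illustration only.
import torch
import torch.nn.functional as F

def latent_interpolation_loss(render_avatar, z_a, z_b, face_a, face_b,
                              expression, camera):
    """Penalize mismatch between a render at an interpolated attribute latent
    and a correspondingly interpolated 2D face used as pseudo ground truth."""
    alpha = torch.rand(1, device=z_a.device)        # random blend weight in [0, 1)
    z_mix = (1.0 - alpha) * z_a + alpha * z_b       # interpolated attribute latent
    # Pseudo target: here a simple pixel blend; the paper instead uses
    # interpolated 2D faces produced by its attribute-editing pipeline.
    face_mix = (1.0 - alpha) * face_a + alpha * face_b
    pred = render_avatar(z_mix, expression, camera) # e.g., a 3D Gaussian Splatting render
    return F.l1_loss(pred, face_mix)
```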