SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces
January 16, 2025
Authors: Sumit Chaturvedi, Mengwei Ren, Yannick Hold-Geoffroy, Jingyuan Liu, Julie Dorsey, Zhixin Shu
cs.AI
Abstract
We introduce SynthLight, a diffusion model for portrait relighting. Our
approach frames image relighting as a re-rendering problem, where pixels are
transformed in response to changes in environmental lighting conditions. Using
a physically-based rendering engine, we synthesize a dataset to simulate this
lighting-conditioned transformation with 3D head assets under varying lighting.
We propose two training and inference strategies to bridge the gap between the
synthetic and real image domains: (1) multi-task training that takes advantage
of real human portraits without lighting labels; (2) an inference time
diffusion sampling procedure based on classifier-free guidance that leverages
the input portrait to better preserve details. Our method generalizes to
diverse real photographs and produces realistic illumination effects, including
specular highlights and cast shadows, while preserving the subject's identity.
Our quantitative experiments on Light Stage data demonstrate results comparable
to state-of-the-art relighting methods. Our qualitative results on in-the-wild
images showcase rich and unprecedented illumination effects. Project Page:
https://vrroom.github.io/synthlight/
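The abstract mentions an inference-time diffusion sampling procedure based on classifier-free guidance that leverages the input portrait. As a rough illustration of the underlying mechanism, below is a minimal sketch of the standard classifier-free guidance combination step; the function name `cfg_combine`, the single guidance scale, and the toy arrays are illustrative assumptions, not the paper's exact formulation (which additionally conditions on the input portrait to preserve detail).

```python
import numpy as np

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray,
                scale: float) -> np.ndarray:
    """Standard classifier-free guidance: extrapolate from the
    unconditional noise prediction toward the conditional one.
    scale = 1.0 reproduces the conditional prediction exactly;
    scale > 1.0 amplifies the effect of the conditioning signal."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy stand-ins for the denoiser's noise predictions at one sampling step.
rng = np.random.default_rng(0)
e_u = rng.standard_normal((4, 4))  # unconditional prediction
e_c = rng.standard_normal((4, 4))  # conditional prediction

guided = cfg_combine(e_u, e_c, scale=2.0)
```

In practice this combination is applied at every denoising step inside the sampler; the guidance scale trades off fidelity to the conditioning (here, the target lighting and the input portrait) against sample diversity.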