LumiNet: Latent Intrinsics Meets Diffusion Models for Indoor Scene Relighting
November 29, 2024
Authors: Xiaoyan Xing, Konrad Groh, Sezer Karaoglu, Theo Gevers, Anand Bhattad
cs.AI
Abstract
We introduce LumiNet, a novel architecture that leverages generative models and latent intrinsic representations for effective lighting transfer. Given a source image and a target lighting image, LumiNet synthesizes a relit version of the source scene that captures the target's lighting. Our approach makes two key contributions: a data curation strategy from the StyleGAN-based relighting model for our training, and a modified diffusion-based ControlNet that processes both latent intrinsic properties from the source image and latent extrinsic properties from the target image. We further improve lighting transfer through a learned adaptor (MLP) that injects the target's latent extrinsic properties via cross-attention and fine-tuning.
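The paper itself includes no code, so as an illustration, here is a minimal PyTorch sketch of how a learned MLP adaptor might map a target's latent extrinsic (lighting) code into tokens that a diffusion U-Net block consumes via cross-attention. All names, dimensions, and token counts (`LightingAdaptor`, `CrossAttentionInjection`, `extrinsic_dim=512`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LightingAdaptor(nn.Module):
    """Hypothetical sketch: an MLP that maps a latent extrinsic (lighting)
    code from the target image into a short sequence of context tokens that
    a cross-attention layer can consume. Sizes are assumptions."""

    def __init__(self, extrinsic_dim=512, context_dim=768, num_tokens=8):
        super().__init__()
        self.num_tokens = num_tokens
        self.mlp = nn.Sequential(
            nn.Linear(extrinsic_dim, context_dim * num_tokens),
            nn.SiLU(),
            nn.Linear(context_dim * num_tokens, context_dim * num_tokens),
        )

    def forward(self, z_extrinsic):            # (B, extrinsic_dim)
        tokens = self.mlp(z_extrinsic)         # (B, context_dim * num_tokens)
        return tokens.view(z_extrinsic.shape[0], self.num_tokens, -1)

class CrossAttentionInjection(nn.Module):
    """Standard cross-attention: flattened U-Net features attend to the
    lighting tokens, with a residual connection back into the features."""

    def __init__(self, feat_dim=320, context_dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=feat_dim, num_heads=heads,
            kdim=context_dim, vdim=context_dim, batch_first=True)

    def forward(self, feats, lighting_tokens):
        # feats: (B, N, feat_dim) flattened spatial features of one block.
        out, _ = self.attn(feats, lighting_tokens, lighting_tokens)
        return feats + out                     # residual injection

# Usage sketch with dummy tensors:
adaptor = LightingAdaptor()
inject = CrossAttentionInjection()
z_ext = torch.randn(2, 512)                    # target's latent extrinsics
feats = torch.randn(2, 64 * 64, 320)           # one U-Net block's features
feats = inject(feats, adaptor(z_ext))
```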
Unlike traditional ControlNet, which generates images with conditional maps from a single scene, LumiNet processes latent representations from two different images, preserving geometry and albedo from the source while transferring lighting characteristics from the target. Experiments demonstrate that our method successfully transfers complex lighting phenomena, including specular highlights and indirect illumination, across scenes with varying spatial layouts and materials, outperforming existing approaches on challenging indoor scenes using only images as input.
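For concreteness, a hypothetical sketch of the dual-image conditioning flow described above: an encoder splits each image into intrinsic (geometry/albedo) and extrinsic (lighting) codes, and the relit image is generated from the source's intrinsics plus the target's extrinsics. The tiny CNN, all sizes, and the `relight`/`diffusion` interface are invented placeholders, not LumiNet's actual API.

```python
import torch
import torch.nn as nn

class LatentIntrinsicEncoder(nn.Module):
    """Placeholder for a latent-intrinsics encoder that factors an image
    into an intrinsic code and an extrinsic (lighting) code. A tiny CNN
    stands in for the real network; all sizes are assumptions."""

    def __init__(self, intrinsic_dim=512, extrinsic_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=4), nn.SiLU(),
            nn.Conv2d(64, 128, 4, stride=4), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.to_intrinsic = nn.Linear(128, intrinsic_dim)
        self.to_extrinsic = nn.Linear(128, extrinsic_dim)

    def forward(self, img):
        h = self.backbone(img)
        return self.to_intrinsic(h), self.to_extrinsic(h)

def relight(source_img, target_img, encoder, diffusion):
    """Dual-image conditioning: intrinsics from the source, extrinsics
    from the target, both fed to a ControlNet-style diffusion model."""
    z_int_src, _ = encoder(source_img)     # keep source geometry/albedo
    _, z_ext_tgt = encoder(target_img)     # take target lighting
    return diffusion(intrinsic=z_int_src, extrinsic=z_ext_tgt)

# Usage sketch with a stand-in diffusion callable:
encoder = LatentIntrinsicEncoder()
dummy_diffusion = lambda intrinsic, extrinsic: intrinsic
src = torch.randn(1, 3, 256, 256)
tgt = torch.randn(1, 3, 256, 256)
out = relight(src, tgt, encoder, dummy_diffusion)
```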