GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation
October 27, 2024
Authors: Phillip Y. Lee, Taehoon Yoon, Minhyuk Sung
cs.AI
Abstract
We introduce a novel training-free spatial grounding technique for
text-to-image generation using Diffusion Transformers (DiT). Spatial grounding
with bounding boxes has gained attention for its simplicity and versatility,
allowing for enhanced user control in image generation. However, prior
training-free approaches often rely on updating the noisy image during the
reverse diffusion process via backpropagation from custom loss functions, which
frequently struggle to provide precise control over individual bounding boxes.
In this work, we leverage the flexibility of the Transformer architecture,
demonstrating that DiT can generate noisy patches corresponding to each
bounding box, fully encoding the target object and allowing for fine-grained
control over each region. Our approach builds on an intriguing property of DiT,
which we refer to as semantic sharing. Due to semantic sharing, when a smaller
patch is jointly denoised alongside a generatable-size image, the two become
"semantic clones". Each patch is denoised in its own branch of the generation
process and then transplanted into the corresponding region of the original
noisy image at each timestep, resulting in robust spatial grounding for each
bounding box. In our experiments on the HRS and DrawBench benchmarks, we
achieve state-of-the-art performance compared to previous training-free spatial
grounding approaches.
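The per-timestep procedure described in the abstract, where each patch is denoised in its own branch and then transplanted into its bounding-box region of the full noisy image, can be sketched as below. This is a minimal illustration, not the authors' implementation: `denoise_step` is a hypothetical stand-in for one reverse-diffusion step of the DiT, latents are treated as plain height-by-width-by-channel arrays, and boxes are `(y0, x0, y1, x1)` pixel coordinates.

```python
import numpy as np


def transplant(image_latent, patch_latent, box):
    """Copy a denoised patch latent into its bounding-box region of the
    full image latent (the per-timestep 'transplantation' step)."""
    y0, x0, y1, x1 = box
    out = image_latent.copy()
    out[y0:y1, x0:x1, :] = patch_latent
    return out


def grounded_denoising_loop(image_latent, patch_latents, boxes,
                            denoise_step, timesteps):
    """Sketch of the grounded generation loop: at each timestep the full
    image and every patch are denoised in separate branches, then each
    patch is transplanted into its corresponding bounding-box region."""
    for t in timesteps:
        image_latent = denoise_step(image_latent, t)
        for i, (patch, box) in enumerate(zip(patch_latents, boxes)):
            patch_latents[i] = denoise_step(patch, t)
            image_latent = transplant(image_latent, patch_latents[i], box)
    return image_latent
```

In the actual method, the per-patch branches also exploit the semantic-sharing property, so each patch and the full image act as "semantic clones" during joint denoising; the sketch only captures the transplantation control flow.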