
Diffusion Self-Distillation for Zero-Shot Customized Image Generation

November 27, 2024
Authors: Shengqu Cai, Eric Chan, Yunzhi Zhang, Leonidas Guibas, Jiajun Wu, Gordon Wetzstein
cs.AI

Abstract

Text-to-image diffusion models produce impressive results but are frustrating tools for artists who desire fine-grained control. For example, a common use case is to create images of a specific instance in novel contexts, i.e., "identity-preserving generation". This setting, along with many other tasks (e.g., relighting), is a natural fit for image+text-conditional generative models. However, there is insufficient high-quality paired data to train such a model directly. We propose Diffusion Self-Distillation, a method for using a pre-trained text-to-image model to generate its own dataset for text-conditioned image-to-image tasks. We first leverage a text-to-image diffusion model's in-context generation ability to create grids of images and curate a large paired dataset with the help of a Visual-Language Model. We then fine-tune the text-to-image model into a text+image-to-image model using the curated paired dataset. We demonstrate that Diffusion Self-Distillation outperforms existing zero-shot methods and is competitive with per-instance tuning techniques on a wide range of identity-preservation generation tasks, without requiring test-time optimization.
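The abstract describes a three-stage pipeline: generate image grids in-context with a pre-trained text-to-image model, curate identity-consistent pairs with a Vision-Language Model, then fine-tune on the curated pairs. Below is a minimal, hypothetical sketch of the first two stages, assuming a Stable Diffusion checkpoint via diffusers and substituting CLIP image-image similarity for the paper's VLM filter; the grid prompt, similarity threshold, and model names are illustrative assumptions, not the authors' actual choices.

```python
# Minimal sketch of the self-distillation data-generation stage.
# Assumptions (not from the paper): Stable Diffusion v1.5 as the base model,
# a 2x2 grid prompt, and CLIP similarity as a stand-in for the VLM filter.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: a pre-trained text-to-image model generates its own training data.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)

# Crude consistency scorer (the paper uses a Vision-Language Model instead).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative prompt template exploiting in-context grid generation.
GRID_PROMPT = (
    "a 2x2 grid of photos of the same {subject}, identical subject, "
    "different poses and backgrounds"
)

def generate_pair(subject: str, threshold: float = 0.85):
    """Generate a grid, split it into panels, and keep a (reference, target)
    pair only if the panels plausibly show the same identity."""
    image = pipe(GRID_PROMPT.format(subject=subject)).images[0]
    w, h = image.size
    panels = [
        image.crop((0, 0, w // 2, h // 2)),  # top-left panel
        image.crop((w // 2, 0, w, h // 2)),  # top-right panel
    ]
    inputs = proc(images=panels, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    similarity = (feats[0] @ feats[1]).item()
    # Stage 2: curation -- discard pairs the scorer judges inconsistent.
    return (panels[0], panels[1]) if similarity >= threshold else None

pair = generate_pair("ginger cat wearing a tiny scarf")
if pair is not None:
    reference, target = pair  # one training pair for the image+text model
```

Stage 3 of the paper, fine-tuning the base model into a text+image-to-image model on these curated pairs, is outside the scope of this sketch; it would consume (reference, target, caption) triples produced by a loop over many subjects.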

