GraPE: A Generate-Plan-Edit Framework for Compositional T2I Synthesis

December 8, 2024
Authors: Ashish Goswami, Satyam Kumar Modi, Santhosh Rishi Deshineni, Harman Singh, Prathosh A. P, Parag Singla
cs.AI

Abstract

Text-to-image (T2I) generation has seen significant progress with diffusion models, enabling generation of photo-realistic images from text prompts. Despite this progress, existing methods still struggle to follow complex text prompts, especially those requiring compositional and multi-step reasoning. Given such complex instructions, SOTA models often make mistakes in faithfully modeling object attributes and the relationships among them. In this work, we present an alternate paradigm for T2I synthesis, decomposing the task of complex multi-step generation into three steps: (a) Generate: we first generate an image using an existing diffusion model; (b) Plan: we use a multi-modal LLM (MLLM) to identify the mistakes in the generated image, expressed in terms of individual objects and their properties, and to produce the sequence of corrective steps required in the form of an edit plan; (c) Edit: we use an existing text-guided image-editing model to sequentially execute the edit plan over the generated image, obtaining the desired image that is faithful to the original instruction. Our approach derives its strength from being modular, training-free, and applicable over any combination of image generation and editing models. As an added contribution, we also develop a model capable of compositional editing, which further improves the overall accuracy of our proposed approach. Our method flexibly trades inference-time compute for performance on compositional text prompts. We perform extensive experimental evaluation across 3 benchmarks and 10 T2I models, including DALLE-3 and the latest SD-3.5-Large. Our approach not only improves the performance of SOTA models by up to 3 points, it also reduces the performance gap between weaker and stronger models. Project page: https://dair-iitd.github.io/GraPE/
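
The three-step pipeline reduces to a simple control loop. Below is a minimal Python sketch of the Generate-Plan-Edit procedure, assuming hypothetical callables generate (a diffusion model), plan (an MLLM that returns a list of corrective edit instructions, empty when the image already matches the prompt), and edit (a text-guided image editor); these names are illustrative placeholders, not the authors' API.

from typing import Any, Callable, List

def grape(
    prompt: str,
    generate: Callable[[str], Any],         # (a) diffusion model, e.g. SD-3.5-Large
    plan: Callable[[Any, str], List[str]],  # (b) MLLM: image + prompt -> edit plan
    edit: Callable[[Any, str], Any],        # (c) text-guided image editor
    max_rounds: int = 3,
) -> Any:
    """Generate an image, then iteratively plan and apply corrective edits."""
    image = generate(prompt)                # (a) Generate an initial image
    for _ in range(max_rounds):
        steps = plan(image, prompt)         # (b) Plan: mistakes as edit steps
        if not steps:                       # image is faithful to the prompt
            return image
        for step in steps:                  # (c) Edit: execute plan in sequence
            image = edit(image, step)
    return image

Because each stage is just a callable, any combination of generation and editing models can be plugged in, and max_rounds is one way to control how much inference-time compute is traded for compositional accuracy.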
