Unified Multimodal Discrete Diffusion
March 26, 2025
Authors: Alexander Swerdlow, Mihir Prabhudesai, Siddharth Gandhi, Deepak Pathak, Katerina Fragkiadaki
cs.AI
Abstract
Multimodal generative models that can understand and generate across multiple
modalities are dominated by autoregressive (AR) approaches, which process
tokens sequentially from left to right or top to bottom. These models jointly
handle images, text, video, and audio for various tasks such as image
captioning, question answering, and image generation. In this work, we explore
discrete diffusion models as a unified generative formulation in the joint text
and image domain, building upon their recent success in text generation.
Discrete diffusion models offer several advantages over AR models, including
improved control over quality versus diversity of generated samples, the
ability to perform joint multimodal inpainting (across both text and image
domains), and greater controllability in generation through guidance.
Leveraging these benefits, we present the first Unified Multimodal Discrete
Diffusion (UniDisc) model, capable of jointly understanding and generating
text and images for a variety of downstream tasks. We compare UniDisc to
multimodal AR models through a scaling analysis and show that UniDisc
surpasses them in performance and inference-time compute efficiency, while
offering enhanced controllability, editability, inpainting, and a flexible
trade-off between inference time and generation quality. Code and additional
visualizations are available at https://unidisc.github.io.
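
To make the mechanism concrete, below is a minimal sketch of what sampling from such a model could look like: masked discrete diffusion over a single joint sequence of text and image tokens, with classifier-free guidance providing the controllability the abstract mentions. All names, shapes, and the linear unmasking schedule here are illustrative assumptions, not UniDisc's actual implementation.

```python
# Minimal sketch (assumed, not UniDisc's actual code) of masked
# discrete-diffusion sampling over a joint text+image token sequence.
import torch

MASK_ID = 0    # hypothetical [MASK] token id in the shared vocabulary
SEQ_LEN = 128  # e.g. 32 text tokens followed by 96 image tokens

@torch.no_grad()
def sample(denoiser, cond, steps=16, guidance_scale=1.5):
    """Start from an all-[MASK] sequence; at each step predict every token,
    commit the most confident masked positions, and leave the rest masked."""
    tokens = torch.full((1, SEQ_LEN), MASK_ID, dtype=torch.long)
    for step in range(steps):
        # Classifier-free guidance: blend conditional and unconditional logits.
        logits_c = denoiser(tokens, cond)   # (1, SEQ_LEN, vocab_size)
        logits_u = denoiser(tokens, None)
        logits = logits_u + guidance_scale * (logits_c - logits_u)

        conf, pred = logits.softmax(dim=-1).max(dim=-1)
        # Only still-masked positions are candidates for being committed.
        conf = conf.masked_fill(tokens.ne(MASK_ID), -1.0)

        # Linear schedule: reveal roughly SEQ_LEN/steps positions per step.
        n = SEQ_LEN * (step + 1) // steps - SEQ_LEN * step // steps
        idx = conf.topk(n, dim=-1).indices
        tokens.scatter_(1, idx, pred.gather(1, idx))
    return tokens
```

Because any subset of positions can start out masked, the same loop supports the joint multimodal inpainting the abstract describes: fixing known text or image tokens and masking only the region to be regenerated.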