Remasking Discrete Diffusion Models with Inference-Time Scaling

March 1, 2025
Authors: Guanghan Wang, Yair Schiff, Subham Sekhar Sahoo, Volodymyr Kuleshov
cs.AI

Abstract

Part of the success of diffusion models stems from their ability to perform iterative refinement, i.e., repeatedly correcting outputs during generation. However, modern masked discrete diffusion models lack this capability: once a token is generated, it cannot be updated again, even when it introduces an error. Here, we address this limitation by introducing the remasking diffusion model (ReMDM) sampler, a method that can be applied to pretrained masked diffusion models in a principled way and that is derived from a discrete diffusion model with a custom remasking backward process. Most interestingly, ReMDM endows discrete diffusion with a form of inference-time compute scaling: by increasing the number of sampling steps, ReMDM generates natural language outputs that approach the quality of autoregressive models, and when the computation budget is limited, it better maintains quality. ReMDM also improves the sample quality of masked diffusion models on discretized images, and in scientific domains such as molecule design, it facilitates diffusion guidance and pushes the Pareto frontier of controllability relative to classical masking and uniform-noise diffusion. We provide the code along with a blog post on the project page: https://remdm.github.io.
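The abstract describes the sampler only at a high level. As a rough illustration of the idea, the following is a minimal PyTorch sketch of a remasking reverse process: a standard masked-diffusion sampling loop extended with a step that can return already-decoded tokens to the masked state. Every name here (remdm_style_sample, denoise_fn, MASK_ID, the constant remasking probability sigma) is a hypothetical placeholder, not taken from the paper; see https://remdm.github.io for the authors' actual algorithm and code.

```python
# A minimal sketch of a ReMDM-style remasking sampler, based only on the
# abstract's description. All names (denoise_fn, MASK_ID, sigma, ...) are
# hypothetical; the paper's actual sampler and schedules may differ.
import torch

MASK_ID = 0  # hypothetical index of the [MASK] token in the vocabulary


def remdm_style_sample(denoise_fn, seq_len, num_steps=128, sigma=0.05):
    """Reverse process that, unlike a standard masked-diffusion sampler,
    may remask already-generated tokens so later steps can revise them."""
    x = torch.full((seq_len,), MASK_ID, dtype=torch.long)
    for step in range(num_steps):
        # Fraction of positions that should be decoded after this step.
        target_decoded = (step + 1) / num_steps

        # The pretrained masked diffusion model scores every position.
        logits = denoise_fn(x)                      # (seq_len, vocab_size)
        probs = torch.softmax(logits, dim=-1)
        samples = torch.multinomial(probs, num_samples=1).squeeze(-1)

        # Decode enough masked positions to stay on schedule.
        masked = (x == MASK_ID).nonzero().flatten()
        n_new = int(target_decoded * seq_len) - (seq_len - len(masked))
        if n_new > 0:
            chosen = masked[torch.randperm(len(masked))[:n_new]]
            x[chosen] = samples[chosen]

        # The remasking move described in the abstract: with some probability,
        # return already-decoded tokens to the masked state so later steps can
        # correct errors (a standard masked-diffusion sampler never does this).
        if step < num_steps - 1:
            decoded = (x != MASK_ID).nonzero().flatten()
            remask = torch.rand(len(decoded)) < sigma
            x[decoded[remask]] = MASK_ID
    return x


# Interface demo with a stand-in uniform "model"; real use would pass the
# pretrained masked diffusion model's forward function instead.
tokens = remdm_style_sample(lambda x: torch.zeros(len(x), 100), seq_len=16)
```

In this simplified view, sigma = 0 recovers an ordinary masked-diffusion sampler, while larger values trade extra sampling steps for more opportunities to revise earlier tokens, which is the inference-time compute scaling the abstract refers to.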
