Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator
March 3, 2025
作者: Kaiwen Zheng, Yongxin Chen, Huayu Chen, Guande He, Ming-Yu Liu, Jun Zhu, Qinsheng Zhang
cs.AI
Abstract
While likelihood-based generative models, particularly diffusion and
autoregressive models, have achieved remarkable fidelity in visual generation,
the maximum likelihood estimation (MLE) objective inherently suffers from a
mode-covering tendency that limits the generation quality under limited model
capacity. In this work, we propose Direct Discriminative Optimization (DDO) as
a unified framework that bridges likelihood-based generative training and the
GAN objective to bypass this fundamental constraint. Our key insight is to
parameterize a discriminator implicitly using the likelihood ratio between a
learnable target model and a fixed reference model, drawing parallels with the
philosophy of Direct Preference Optimization (DPO). Unlike GANs, this
parameterization eliminates the need for joint training of generator and
discriminator networks, allowing for direct, efficient, and effective
finetuning of a well-trained model to its full potential beyond the limits of
MLE. DDO can be performed iteratively in a self-play manner for progressive
model refinement, with each round requiring less than 1% of pretraining epochs.
Our experiments demonstrate the effectiveness of DDO by significantly advancing
the previous SOTA diffusion model EDM, reducing FID scores from 1.79/1.58 to
new records of 1.30/0.97 on CIFAR-10/ImageNet-64 datasets, and by consistently
improving both guidance-free and CFG-enhanced FIDs of visual autoregressive
models on ImageNet 256×256.
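
To make the abstract's key insight concrete, here is a minimal sketch of the implicit discriminator construction it describes. The logistic GAN-style loss form and the symbols p_\theta (the learnable target model) and p_\mathrm{ref} (the frozen reference model) are assumptions inferred from the abstract's GAN/DPO framing, not necessarily the paper's exact objective:

D_\theta(x) = \sigma\!\left(\log \frac{p_\theta(x)}{p_{\mathrm{ref}}(x)}\right), \qquad \sigma(u) = \frac{1}{1 + e^{-u}},

\mathcal{L}(\theta) = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D_\theta(x)\right] \;-\; \mathbb{E}_{x \sim p_{\mathrm{ref}}}\!\left[\log\big(1 - D_\theta(x)\big)\right].

Under this parameterization the generative model itself plays the role of the discriminator: as in DPO, where a reward is defined implicitly by \log(\pi_\theta/\pi_{\mathrm{ref}}), no separate discriminator network is trained. Minimizing a loss of this form raises the model's likelihood ratio on real data and lowers it on the reference model's samples, which is how finetuning can move p_\theta beyond the mode-covering MLE solution; since the models are likelihood-based (diffusion or autoregressive), the required log-likelihoods are available or estimable.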