Inconsistencies In Consistency Models: Better ODE Solving Does Not Imply Better Samples
November 13, 2024
Authors: Noël Vouitsis, Rasa Hosseinzadeh, Brendan Leigh Ross, Valentin Villecroze, Satya Krishna Gorti, Jesse C. Cresswell, Gabriel Loaiza-Ganem
cs.AI
Abstract
Although diffusion models can generate remarkably high-quality samples, they
are intrinsically bottlenecked by their expensive iterative sampling procedure.
Consistency models (CMs) have recently emerged as a promising diffusion model
distillation method, reducing the cost of sampling by generating high-fidelity
samples in just a few iterations. Consistency model distillation aims to solve
the probability flow ordinary differential equation (ODE) defined by an
existing diffusion model. CMs are not directly trained to minimize error
against an ODE solver; rather, they use a more computationally tractable
objective. As a way to study how effectively CMs solve the probability flow
ODE, and the effect that any induced error has on the quality of generated
samples, we introduce Direct CMs, which directly minimize this error.
Intriguingly, we find that Direct CMs reduce the ODE solving error compared to
CMs but also result in significantly worse sample quality, calling into
question why exactly CMs work well in the first place. Full code is available
at: https://github.com/layer6ai-labs/direct-cms.
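The contrast between the two objectives described in the abstract can be sketched on a toy one-dimensional ODE. Everything below is illustrative and not taken from the paper: the ODE dx/dt = -x stands in for the probability flow ODE, the one-parameter map `f` stands in for the consistency model, and the function names are invented. A "Direct CM" loss matches the output of a numerical solver integrated all the way to t = 0, while a "CM"-style loss only enforces self-consistency between two adjacent points on the same trajectory, which is far cheaper to compute.

```python
import math

# Toy stand-in for the probability flow ODE: dx/dt = -x.
# Its exact solution map from time t back to time 0 is x(0) = x(t) * exp(t).
def ode_rhs(x, t):
    return -x

def heun_solve_to_zero(x_t, t, n_steps=100):
    """The 'teacher' ODE solver: integrate from time t down to 0 with Heun's method."""
    dt = -t / n_steps
    x, s = x_t, t
    for _ in range(n_steps):
        d1 = ode_rhs(x, s)
        d2 = ode_rhs(x + dt * d1, s + dt)
        x = x + dt * 0.5 * (d1 + d2)
        s += dt
    return x

# A trivial one-parameter 'consistency model': f(x, t) = x * exp(theta * t).
# For this toy ODE the true solution map corresponds to theta = 1.
def f(x, t, theta):
    return x * math.exp(theta * t)

def direct_cm_loss(theta, x_t, t):
    """Direct-CM-style objective: match the solver's output at t = 0
    (requires running the full, expensive ODE solve for each sample)."""
    target = heun_solve_to_zero(x_t, t)
    return (f(x_t, t, theta) - target) ** 2

def consistency_loss(theta, x_t, t, dt=1e-2):
    """CM-style objective: self-consistency between adjacent times on the
    same trajectory, using only a single cheap Euler step (the stop-gradient
    on the target branch is omitted in this scalar toy)."""
    x_prev = x_t + (-dt) * ode_rhs(x_t, t)  # one Euler step from t to t - dt
    return (f(x_t, t, theta) - f(x_prev, t - dt, theta)) ** 2
```

At the true parameter theta = 1 both losses are near zero, but the consistency loss only ever compares the model against itself one step away, which is the distinction the paper probes by training Direct CMs against the solver output instead.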