

Boosting Generative Image Modeling via Joint Image-Feature Synthesis

April 22, 2025
Authors: Theodoros Kouzelis, Efstathios Karypidis, Ioannis Kakogeorgiou, Spyros Gidaris, Nikos Komodakis
cs.AI

Abstract

Latent diffusion models (LDMs) dominate high-quality image generation, yet integrating representation learning with generative modeling remains a challenge. We introduce a novel generative image modeling framework that seamlessly bridges this gap by leveraging a diffusion model to jointly model low-level image latents (from a variational autoencoder) and high-level semantic features (from a pretrained self-supervised encoder like DINO). Our latent-semantic diffusion approach learns to generate coherent image-feature pairs from pure noise, significantly enhancing both generative quality and training efficiency, all while requiring only minimal modifications to standard Diffusion Transformer architectures. By eliminating the need for complex distillation objectives, our unified design simplifies training and unlocks a powerful new inference strategy: Representation Guidance, which leverages learned semantics to steer and refine image generation. Evaluated in both conditional and unconditional settings, our method delivers substantial improvements in image quality and training convergence speed, establishing a new direction for representation-aware generative modeling.
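The abstract describes two mechanisms that a short sketch can make concrete: a diffusion model that denoises VAE image latents and pretrained semantic features (e.g., from DINO) jointly, and a Representation Guidance rule applied at sampling time. The PyTorch sketch below is a minimal illustration under assumed shapes and schedules; the backbone `model`, the cosine noise schedule, and the projection of DINO features to a spatial grid are hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch of a joint image-feature denoising objective. The backbone
# `model`, the tensor shapes, and the cosine schedule are illustrative
# assumptions, not the paper's actual implementation.
import torch
import torch.nn.functional as F

def joint_diffusion_loss(model, z_img, z_sem, t):
    """Corrupt image latents and semantic features with a shared timestep,
    then regress the noise added to each modality.

    z_img: VAE image latents, (B, C, H, W)
    z_sem: self-supervised features (e.g. DINO), projected to (B, D, H, W)
    t:     timesteps in [0, 1], shape (B,)
    """
    eps_img = torch.randn_like(z_img)
    eps_sem = torch.randn_like(z_sem)

    # Assumed variance-preserving forward process with a cosine schedule.
    alpha = torch.cos(t * torch.pi / 2).view(-1, 1, 1, 1)
    sigma = torch.sin(t * torch.pi / 2).view(-1, 1, 1, 1)
    x_img = alpha * z_img + sigma * eps_img
    x_sem = alpha * z_sem + sigma * eps_sem

    # The transformer sees both noisy modalities and predicts noise for each.
    pred_img, pred_sem = model(x_img, x_sem, t)
    return F.mse_loss(pred_img, eps_img) + F.mse_loss(pred_sem, eps_sem)
```

Representation Guidance is sketched here by analogy with classifier-free guidance: contrast a noise prediction that sees the current semantic-feature estimate with one where the semantic channel is zeroed out, and push the image prediction along the semantics-informed direction. The zero mask and the guidance weight `w` are assumptions; the paper's exact rule may differ.

```python
def representation_guided_step(model, x_img, x_sem, t, w=1.5):
    """One guided denoising step (sketch). The semantics-conditioned and
    semantics-masked predictions are contrasted, and the image-noise
    estimate is extrapolated toward the semantics-informed direction.
    """
    eps_joint, eps_sem = model(x_img, x_sem, t)
    eps_base, _ = model(x_img, torch.zeros_like(x_sem), t)
    eps_img = eps_base + w * (eps_joint - eps_base)
    return eps_img, eps_sem
```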

