
CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up

December 20, 2024
Authors: Songhua Liu, Zhenxiong Tan, Xinchao Wang
cs.AI

Abstract

Diffusion Transformers (DiT) have become a leading architecture in image generation. However, the quadratic complexity of attention mechanisms, which are responsible for modeling token-wise relationships, results in significant latency when generating high-resolution images. To address this issue, we aim in this paper at a linear attention mechanism that reduces the complexity of pre-trained DiTs to linear. We begin our exploration with a comprehensive summary of existing efficient attention mechanisms and identify four key factors crucial for successful linearization of pre-trained DiTs: locality, formulation consistency, high-rank attention maps, and feature integrity. Based on these insights, we introduce a convolution-like local attention strategy termed CLEAR, which limits feature interactions to a local window around each query token, and thus achieves linear complexity. Our experiments indicate that, by fine-tuning the attention layer on merely 10K self-generated samples for 10K iterations, we can effectively transfer knowledge from a pre-trained DiT to a student model with linear complexity, yielding results comparable to the teacher model. Simultaneously, it reduces attention computations by 99.5% and accelerates generation by 6.3 times for generating 8K-resolution images. Furthermore, we investigate favorable properties in the distilled attention layers, such as zero-shot generalization across various models and plugins, and improved support for multi-GPU parallel inference. Models and codes are available here: https://github.com/Huage001/CLEAR.
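
The core mechanism described above, restricting each query token's attention to a local window around its position on the image-token grid, can be illustrated with a small sketch. The snippet below is a naive, hypothetical illustration rather than the official CLEAR implementation (the function name local_window_attention, the Euclidean radius, and the square token grid are assumptions of this sketch); it also still materializes a full attention mask for readability, whereas an efficient kernel would gather only the neighboring tokens so that cost grows linearly with the number of tokens.

# Minimal sketch of convolution-like local attention (illustrative only, not
# the official CLEAR code). Each query token attends only to key/value tokens
# whose 2D grid positions lie within a Euclidean radius, so the number of
# interactions per query is constant and total cost scales linearly with the
# token count. This naive version builds a full N x N mask for clarity.
import torch
import torch.nn.functional as F

def local_window_attention(q, k, v, grid_h, grid_w, radius=4.0):
    # q, k, v: (batch, heads, N, dim) with N = grid_h * grid_w image tokens
    ys, xs = torch.meshgrid(
        torch.arange(grid_h), torch.arange(grid_w), indexing="ij"
    )
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    dist = torch.cdist(coords, coords)              # pairwise grid distances, (N, N)
    mask = dist <= radius                           # local window per query token
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B, H, N, N)
    scores = scores.masked_fill(~mask, float("-inf"))       # drop non-local pairs
    return F.softmax(scores, dim=-1) @ v

# Toy usage: a 16x16 token grid, 2 heads, 64-dim head size.
B, H, N, D = 1, 2, 16 * 16, 64
q = torch.randn(B, H, N, D)
k = torch.randn(B, H, N, D)
v = torch.randn(B, H, N, D)
out = local_window_attention(q, k, v, grid_h=16, grid_w=16, radius=4.0)
print(out.shape)  # torch.Size([1, 2, 256, 64])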
