
On Computational Limits and Provably Efficient Criteria of Visual Autoregressive Models: A Fine-Grained Complexity Analysis

January 8, 2025
Authors: Yekun Ke, Xiaoyu Li, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song
cs.AI

Abstract

Recently, Visual Autoregressive (VAR) models introduced a groundbreaking advancement in the field of image generation, offering a scalable approach through a coarse-to-fine "next-scale prediction" paradigm. However, the state-of-the-art VAR algorithm of [Tian, Jiang, Yuan, Peng and Wang, NeurIPS 2024] takes O(n^4) time, which is computationally inefficient. In this work, we analyze the computational limits and efficiency criteria of VAR models through a fine-grained complexity lens. Our key contribution is identifying the conditions under which VAR computations can achieve sub-quadratic time complexity. Specifically, we establish a critical threshold for the norm of the input matrices used in VAR attention mechanisms. Above this threshold, assuming the Strong Exponential Time Hypothesis (SETH) from fine-grained complexity theory, a sub-quartic time algorithm for VAR models is impossible. To substantiate our theoretical findings, we present efficient constructions leveraging low-rank approximations that align with the derived criteria. This work initiates the study of the computational efficiency of VAR models from a theoretical perspective. Our techniques shed light on advancing scalable and efficient image generation in VAR frameworks.
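The efficiency criterion above rests on a standard idea: when the entries of the attention inputs are suitably bounded, the n x n attention matrix admits a low-rank factorization, so attention can be computed without ever materializing it. The following is a minimal illustrative sketch of this generic low-rank mechanism using positive random features (in the style of kernel-based attention approximations); it is not the paper's specific construction, and the function name, rank parameter `r`, and feature map are assumptions for illustration. The cost is O(n * r * d) rather than O(n^2 * d).

```python
import numpy as np

def lowrank_attention(Q, K, V, r, seed=None):
    """Approximate softmax-style attention, softmax(Q K^T) V, in sub-quadratic time.

    Uses r positive random features whose inner products approximate
    exp(q . k), so the n x n attention matrix is replaced by an implicit
    rank-r factorization. Accuracy improves as r grows, and the
    approximation is only good when the rows of Q and K have small norm,
    mirroring the bounded-norm criterion discussed above.
    """
    rng = np.random.default_rng(seed)
    n, d = Q.shape
    W = rng.standard_normal((d, r))  # shared random projection directions

    def phi(X):
        # Positive random features: E[phi(x) . phi(y)] = exp(x . y)
        return np.exp(X @ W - 0.5 * np.sum(X**2, axis=1, keepdims=True)) / np.sqrt(r)

    Qf, Kf = phi(Q), phi(K)                        # each (n, r)
    numer = Qf @ (Kf.T @ V)                        # O(n r d); never forms n x n
    denom = Qf @ Kf.sum(axis=0, keepdims=True).T   # row-wise softmax normalizers
    return numer / denom
```

Note the parenthesization `Qf @ (Kf.T @ V)`: associating the products this way is exactly what keeps the computation sub-quadratic in n, since the intermediate `Kf.T @ V` is only r x d.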

