On Computational Limits and Provably Efficient Criteria of Visual Autoregressive Models: A Fine-Grained Complexity Analysis

January 8, 2025
Authors: Yekun Ke, Xiaoyu Li, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song
cs.AI

Abstract

Recently, Visual Autoregressive (VAR) Models introduced a groundbreaking advancement in the field of image generation, offering a scalable approach through a coarse-to-fine "next-scale prediction" paradigm. However, the state-of-the-art algorithm of VAR models in [Tian, Jiang, Yuan, Peng and Wang, NeurIPS 2024] takes O(n^4) time, which is computationally inefficient. In this work, we analyze the computational limits and efficiency criteria of VAR Models through a fine-grained complexity lens. Our key contribution is identifying the conditions under which VAR computations can achieve sub-quadratic time complexity. Specifically, we establish a critical threshold for the norm of input matrices used in VAR attention mechanisms. Above this threshold, assuming the Strong Exponential Time Hypothesis (SETH) from fine-grained complexity theory, a sub-quartic time algorithm for VAR models is impossible. To substantiate our theoretical findings, we present efficient constructions leveraging low-rank approximations that align with the derived criteria. This work initiates the study of the computational efficiency of the VAR model from a theoretical perspective. Our technique will shed light on advancing scalable and efficient image generation in VAR frameworks.
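The efficient constructions mentioned above replace the quadratic-size attention matrix with a low-rank factorization, which is what makes sub-quadratic time possible below the norm threshold. The paper's precise construction is not reproduced here; the following is a minimal illustrative sketch of the general idea using a positive random-feature (Performer-style) rank-k approximation of softmax attention, where `low_rank_attention`, the feature map `phi`, and the rank parameter `k` are all hypothetical names chosen for this example:

```python
import numpy as np

def exact_attention(Q, K, V):
    # Standard softmax attention: materializes the n x n matrix,
    # so time and memory are Omega(n^2).
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[1]))
    return (A / A.sum(axis=1, keepdims=True)) @ V

def low_rank_attention(Q, K, V, k=256, seed=0):
    # Rank-k approximation: exp(QK^T / sqrt(d)) ~= phi(Q) phi(K)^T,
    # using positive random features. Associativity lets us compute
    # phi(Q) (phi(K)^T V) in O(n k d) time without the n x n matrix.
    d = Q.shape[1]
    W = np.random.default_rng(seed).standard_normal((d, k))

    def phi(X):
        Xs = X / d ** 0.25  # fold in the 1/sqrt(d) temperature
        # E_w[exp(q.w) exp(k.w)] * exp(-(|q|^2+|k|^2)/2) = exp(q.k)
        return np.exp(Xs @ W - (Xs**2).sum(axis=1, keepdims=True) / 2) / np.sqrt(k)

    Qf, Kf = phi(Q), phi(K)
    numerator = Qf @ (Kf.T @ V)                  # n x d, linear in n
    denominator = Qf @ Kf.sum(axis=0)[:, None]   # row-wise normalizer
    return numerator / denominator
```

When the entries of Q and K are small (i.e., the matrix norm stays below the kind of threshold the paper identifies), the low-rank estimate concentrates tightly around the exact output; for large-norm inputs the softmax kernel becomes sharply peaked and such factorizations degrade, consistent with the hardness result stated above.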

