
Randomized Autoregressive Visual Generation

November 1, 2024
Authors: Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, Liang-Chieh Chen
cs.AI

Abstract

This paper presents Randomized AutoRegressive modeling (RAR) for visual generation, which sets a new state-of-the-art on the image generation task while maintaining full compatibility with language modeling frameworks. The proposed RAR is simple: during a standard autoregressive training process with a next-token prediction objective, the input sequence, typically ordered in raster form, is randomly permuted into a different factorization order with probability r, where r starts at 1 and linearly decays to 0 over the course of training. This annealing strategy trains the model to maximize the expected likelihood over all factorization orders, effectively improving its ability to model bidirectional contexts. Importantly, RAR preserves the integrity of the autoregressive modeling framework, ensuring full compatibility with language modeling while significantly improving image generation performance. On the ImageNet-256 benchmark, RAR achieves an FID score of 1.48, not only surpassing prior state-of-the-art autoregressive image generators but also outperforming leading diffusion-based and masked transformer-based methods. Code and models will be made available at https://github.com/bytedance/1d-tokenizer.
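The core training trick described above, sampling a random factorization order with an annealed probability r, can be sketched as follows. This is a minimal illustration of the annealing schedule as stated in the abstract, not the authors' implementation; the function names (`permutation_probability`, `factorization_order`) and the choice of a simple linear schedule helper are assumptions for this sketch.

```python
import random


def permutation_probability(step: int, total_steps: int) -> float:
    """Linearly anneal r from 1.0 at the start of training to 0.0 at the end."""
    return max(0.0, 1.0 - step / total_steps)


def factorization_order(num_tokens: int, step: int, total_steps: int,
                        rng: random.Random) -> list[int]:
    """With probability r, return a random permutation of token positions
    (a random factorization order); otherwise return the standard raster order."""
    r = permutation_probability(step, total_steps)
    order = list(range(num_tokens))
    if rng.random() < r:
        rng.shuffle(order)  # random factorization order
    return order
```

At step 0 every sequence is permuted (r = 1), so the model sees all factorization orders; by the end of training r reaches 0 and the model is trained purely on the raster order, keeping it a standard next-token autoregressive generator at inference time.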

