

Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment

March 10, 2025
作者: Xing Xie, Jiawei Liu, Ziyue Lin, Huijie Fan, Zhi Han, Yandong Tang, Liangqiong Qu
cs.AI

Abstract

We present Autoregressive Representation Alignment (ARRA), a new training framework that unlocks globally coherent text-to-image generation in autoregressive LLMs without architectural changes. Unlike prior work that requires complex architectural redesigns, ARRA aligns LLM hidden states with visual representations from external visual foundation models via a global visual alignment loss and a hybrid token, <HYBNEXT>. This token enforces dual constraints: local next-token prediction and global semantic distillation, enabling LLMs to implicitly learn spatial and contextual coherence while retaining their original autoregressive paradigm. Extensive experiments validate ARRA's plug-and-play versatility. When training from text-generation-only LLMs or random initialization, ARRA reduces FID by 25.5% (MIMIC-CXR), 8.8% (DeepEyeNet), and 7.5% (ImageNet) for advanced autoregressive LLMs like Chameleon and LlamaGen, all without framework modifications. For domain adaptation, ARRA aligns general-purpose LLMs with specialized models (e.g., BioMedCLIP), achieving an 18.6% FID reduction over direct fine-tuning on medical imaging (MIMIC-CXR). By demonstrating that training objective redesign -- not just architectural innovation -- can resolve cross-modal global coherence challenges, ARRA offers a complementary paradigm for advancing autoregressive models. Code and models will be released to advance autoregressive image generation.
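To make the dual constraint concrete, below is a minimal PyTorch-style sketch of what a combined objective of this kind could look like: a standard next-token cross-entropy term plus an alignment term between the LLM hidden state at the <HYBNEXT> position and a global feature from an external visual foundation model. The function name `arra_loss`, the cosine-distance alignment term, the projection head `proj`, and the weight `lam` are illustrative assumptions for this sketch, not the paper's released implementation; the abstract only states that local next-token prediction is combined with a global visual alignment loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def arra_loss(logits, targets, hybnext_hidden, visual_global_feat, proj, lam=0.5):
    """Illustrative ARRA-style objective (sketch, not the official code).

    logits:             (B, T, V) next-token predictions from the autoregressive LLM
    targets:            (B, T)    ground-truth token ids (text + image tokens)
    hybnext_hidden:     (B, D)    LLM hidden state at the <HYBNEXT> position
    visual_global_feat: (B, D_v)  global feature from an external visual foundation
                                  model (e.g., a CLIP/BioMedCLIP image embedding)
    proj:               nn.Linear mapping D -> D_v (hypothetical projection head)
    lam:                weight balancing the two constraints (assumed hyperparameter)
    """
    # Local constraint: standard next-token prediction over the whole sequence.
    ntp_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

    # Global constraint: align the <HYBNEXT> hidden state with the external
    # visual representation (cosine distance is one plausible choice of metric).
    pred_feat = F.normalize(proj(hybnext_hidden), dim=-1)
    ref_feat = F.normalize(visual_global_feat, dim=-1)
    align_loss = 1.0 - (pred_feat * ref_feat).sum(dim=-1).mean()

    return ntp_loss + lam * align_loss


if __name__ == "__main__":
    # Toy shapes only, to show how the pieces fit together.
    B, T, V, D, D_v = 2, 16, 1000, 64, 32
    proj = nn.Linear(D, D_v)
    loss = arra_loss(
        logits=torch.randn(B, T, V),
        targets=torch.randint(0, V, (B, T)),
        hybnext_hidden=torch.randn(B, D),
        visual_global_feat=torch.randn(B, D_v),
        proj=proj,
    )
    print(loss.item())
```

Because the alignment term touches only the hidden state at the <HYBNEXT> token and an external encoder's output, the base LLM architecture and its autoregressive decoding loop are left unchanged, which is what the abstract means by a training-objective redesign rather than an architectural one.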
