The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering
February 5, 2025
Authors: Zhuowei Li, Haizhou Shi, Yunhe Gao, Di Liu, Zhenting Wang, Yuxiao Chen, Ting Liu, Long Zhao, Hao Wang, Dimitris N. Metaxas
cs.AI
Abstract
Large Vision-Language Models (LVLMs) can reason effectively over both textual
and visual inputs, but they tend to hallucinate syntactically coherent yet
visually ungrounded content. In this paper, we investigate the internal
dynamics of hallucination by examining token logit rankings throughout the
generation process, revealing three key patterns in how LVLMs process
information: (1) gradual visual information loss -- visually grounded tokens
gradually become less favored throughout generation; (2) early excitation --
semantically meaningful tokens reach peak activation in layers earlier than
the final layer; and (3) hidden genuine information -- visually grounded
tokens, though not ultimately decoded, still retain relatively high rankings
at inference. Based on these insights, we propose VISTA (Visual Information
Steering with Token-logit Augmentation), a training-free, inference-time
intervention framework that reduces hallucination while promoting genuine
information. VISTA works by combining two complementary approaches:
reinforcing visual information in activation space and leveraging early-layer
activations to promote semantically meaningful decoding. Compared to existing
methods, VISTA requires no external supervision and is applicable to various
decoding strategies. Extensive experiments show that VISTA reduces
hallucination by about 40% on average on the evaluated open-ended generation
task, and it consistently outperforms existing methods on four benchmarks
across four architectures under three decoding strategies.
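The two complementary mechanisms the abstract describes (reinforcing visual information in activation space, and using early-layer activations to augment token logits) can be sketched for a single decoding step. This is a minimal illustrative sketch, not the paper's actual implementation: the function and parameter names (`vista_decode_step`, `steer_alpha`, `blend_lambda`) and the simple additive/linear-blend forms are assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the vocabulary.
    e = np.exp(x - x.max())
    return e / e.sum()

def vista_decode_step(hidden_final, hidden_early, visual_direction, W_unembed,
                      steer_alpha=0.1, blend_lambda=0.5):
    """One decoding step with (hypothetical) VISTA-style intervention.

    hidden_final:     last-layer hidden state, shape (d,)
    hidden_early:     hidden state from an earlier layer, shape (d,)
    visual_direction: a direction in activation space encoding visual
                      information (assumed given), shape (d,)
    W_unembed:        unembedding / LM-head matrix, shape (vocab, d)
    """
    # (1) Visual steering: nudge the final hidden state toward the
    #     visual-information direction in activation space.
    steered = hidden_final + steer_alpha * visual_direction

    # (2) Token-logit augmentation: project an early-layer hidden state
    #     through the LM head and blend its logits with the final ones,
    #     exploiting the "early excitation" of meaningful tokens.
    logits_final = W_unembed @ steered
    logits_early = W_unembed @ hidden_early
    logits = (1 - blend_lambda) * logits_final + blend_lambda * logits_early
    return softmax(logits)

# Toy usage with random states (shapes only; no real model involved).
rng = np.random.default_rng(0)
d, vocab = 8, 16
probs = vista_decode_step(rng.normal(size=d), rng.normal(size=d),
                          rng.normal(size=d), rng.normal(size=(vocab, d)))
```

Because the intervention only reshapes the next-token distribution, a sketch like this composes with any decoding strategy (greedy, beam search, or sampling), which matches the abstract's claim of decoder-agnostic applicability.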