Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme
April 3, 2025
Authors: Yan Ma, Steffi Chern, Xuyang Shen, Yiran Zhong, Pengfei Liu
cs.AI
Abstract
Reinforcement learning (RL) has recently shown strong potential in improving
the reasoning capabilities of large language models and is now being actively
extended to vision-language models (VLMs). However, existing RL applications in
VLMs often rely on heavily engineered frameworks that hinder reproducibility
and accessibility, while lacking standardized evaluation protocols, making it
difficult to compare results or interpret training dynamics. This work
introduces a transparent, from-scratch framework for RL in VLMs, offering a
minimal yet functional four-step pipeline validated across multiple models and
datasets. In addition, a standardized evaluation scheme is proposed to assess
training dynamics and reflective behaviors. Extensive experiments on visual
reasoning tasks uncover key empirical findings: response length is sensitive to
random seeds, reflection correlates with output length, and RL consistently
outperforms supervised fine-tuning (SFT) in generalization, even with
high-quality data. These findings, together with the proposed framework, aim to
establish a reproducible baseline and support broader engagement in RL-based
VLM research.
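The abstract mentions a standardized evaluation scheme for tracking training dynamics and reflective behaviors, and reports that reflection correlates with output length. As a rough illustration of what such per-step monitoring could look like, the sketch below computes mean response length and a reflection ratio over a batch of sampled responses. The cue-phrase list, function name, and whitespace tokenization are illustrative assumptions, not the paper's actual protocol.

```python
from statistics import mean

# Hypothetical reflection cue phrases; the paper's actual detection criteria
# are not given in the abstract, so this list is an illustrative assumption.
REFLECTION_CUES = ("wait", "re-check", "let me reconsider", "on second thought")

def eval_step_metrics(responses):
    """Compute per-step metrics over a batch of sampled model responses:
    mean response length (whitespace tokens) and the fraction of responses
    containing at least one reflection cue."""
    lengths = [len(r.split()) for r in responses]
    reflective = [
        any(cue in r.lower() for cue in REFLECTION_CUES) for r in responses
    ]
    return {
        "mean_response_length": mean(lengths) if lengths else 0.0,
        "reflection_ratio": mean(reflective) if reflective else 0.0,
    }

# Usage: logging these metrics at every RL training step (and across random
# seeds) is one way to observe how response length and reflective behavior
# co-evolve during training.
if __name__ == "__main__":
    batch = [
        "The answer is 12.",
        "Wait, let me reconsider the diagram before answering: the answer is 9.",
    ]
    print(eval_step_metrics(batch))
```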