Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models
April 21, 2025
Authors: Guo Chen, Zhiqi Li, Shihao Wang, Jindong Jiang, Yicheng Liu, Lidong Lu, De-An Huang, Wonmin Byeon, Matthieu Le, Tuomas Rintamaki, Tyler Poon, Max Ehrlich, Tong Lu, Limin Wang, Bryan Catanzaro, Jan Kautz, Andrew Tao, Zhiding Yu, Guilin Liu
cs.AI
Abstract
We introduce Eagle 2.5, a family of frontier vision-language models (VLMs)
for long-context multimodal learning. Our work addresses the challenges in long
video comprehension and high-resolution image understanding, introducing a
generalist framework for both tasks. The proposed training framework
incorporates Automatic Degrade Sampling and Image Area Preservation, two
techniques that preserve contextual integrity and visual details. The framework
also includes numerous efficiency optimizations in the pipeline for
long-context data training. Finally, we propose Eagle-Video-110K, a novel
dataset that integrates both story-level and clip-level annotations,
facilitating long-video understanding. Eagle 2.5 demonstrates substantial
improvements on long-context multimodal benchmarks, providing a robust solution
to the limitations of existing VLMs. Notably, our best model Eagle 2.5-8B
achieves 72.4% on Video-MME with 512 input frames, matching the results of
top-tier commercial models such as GPT-4o and large-scale open-source models
like Qwen2.5-VL-72B and InternVL2.5-78B.
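The abstract names Automatic Degrade Sampling but does not spell out the mechanism. Below is a minimal Python sketch of one plausible reading, assuming a fixed context budget in which text tokens are always kept intact and visual tokens are degraded to fit; the function name, parameters, default values, and degradation order are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of "Automatic Degrade Sampling" (names and defaults are
# assumptions; the paper's actual algorithm is not reproduced here).

def degrade_sample(num_frames: int, tokens_per_frame: int, text_tokens: int,
                   budget: int, min_tokens_per_frame: int = 64):
    """Fit a (frames, tokens-per-frame) pair under a total token budget,
    preserving all text tokens and degrading visual detail first."""
    visual_budget = budget - text_tokens  # text is never truncated
    if visual_budget <= 0:
        raise ValueError("Text alone exceeds the context budget.")
    frames, tpf = num_frames, tokens_per_frame
    # Step 1 (assumed order): halve the per-frame token count (i.e. lower the
    # effective resolution) before touching the temporal context.
    while frames * tpf > visual_budget and tpf > min_tokens_per_frame:
        tpf //= 2
    # Step 2: if still over budget, uniformly subsample frames.
    if frames * tpf > visual_budget:
        frames = max(1, visual_budget // tpf)
    return frames, tpf

# Example: 512 frames x 256 tokens with 2k text tokens in a 32k context
# degrades to 480 frames at 64 tokens per frame.
print(degrade_sample(512, 256, 2048, 32768))  # -> (480, 64)
```

The point of the sketch is the priority ordering: the contextual integrity the abstract emphasizes is preserved by shrinking per-frame visual detail before dropping frames, and by never truncating the text.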