NVILA: Efficient Frontier Visual Language Models

December 5, 2024
Authors: Zhijian Liu, Ligeng Zhu, Baifeng Shi, Zhuoyang Zhang, Yuming Lou, Shang Yang, Haocheng Xi, Shiyi Cao, Yuxian Gu, Dacheng Li, Xiuyu Li, Yunhao Fang, Yukang Chen, Cheng-Yu Hsieh, De-An Huang, An-Chieh Cheng, Vishwesh Nath, Jinyi Hu, Sifei Liu, Ranjay Krishna, Daguang Xu, Xiaolong Wang, Pavlo Molchanov, Jan Kautz, Hongxu Yin, Song Han, Yao Lu
cs.AI

Abstract

Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This "scale-then-compress" approach enables NVILA to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5X, fine-tuning memory usage by 3.4X, pre-filling latency by 1.6-2.2X, and decoding latency by 1.2-2.8X. We will soon make our code and models available to facilitate reproducibility.
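To make the "scale-then-compress" idea concrete, below is a minimal PyTorch sketch of spatial token compression: the vision encoder is run at a higher resolution to produce more visual tokens, and each 2x2 window of neighboring tokens is then folded into a single token before entering the LLM. The module name, the pooling factor, and the projection layer here are illustrative assumptions, not NVILA's actual implementation.

```python
# Hypothetical sketch of "scale-then-compress": scale up resolution (more
# visual tokens), then merge each p x p window of tokens into one token.
# Names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn


class ScaleThenCompress(nn.Module):
    def __init__(self, dim: int, pool: int = 2):
        super().__init__()
        self.pool = pool  # side length of the token window merged into one token
        # Project the concatenated window back to the LLM embedding width.
        self.proj = nn.Linear(dim * pool * pool, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, H*W, dim) visual tokens from a high-resolution encoder.
        b, n, d = tokens.shape
        h = w = int(n ** 0.5)  # assume a square token grid divisible by pool
        p = self.pool
        x = tokens.view(b, h, w, d)
        # Split the grid into p x p windows (space-to-channel reshuffle).
        x = x.view(b, h // p, p, w // p, p, d).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(b, (h // p) * (w // p), p * p * d)
        return self.proj(x)  # p^2-fold fewer tokens, same embedding width


if __name__ == "__main__":
    vit_tokens = torch.randn(1, 32 * 32, 1024)  # 1024 tokens from a 32x32 grid
    compress = ScaleThenCompress(dim=1024, pool=2)
    print(compress(vit_tokens).shape)  # torch.Size([1, 256, 1024])
```

With a 2x2 window, the LLM sees 4x fewer visual tokens per image, which is one plausible way the reported pre-filling and decoding latency reductions could be obtained while the higher input resolution preserves fine detail.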
