
NVILA: Efficient Frontier Visual Language Models

December 5, 2024
Authors: Zhijian Liu, Ligeng Zhu, Baifeng Shi, Zhuoyang Zhang, Yuming Lou, Shang Yang, Haocheng Xi, Shiyi Cao, Yuxian Gu, Dacheng Li, Xiuyu Li, Yunhao Fang, Yukang Chen, Cheng-Yu Hsieh, De-An Huang, An-Chieh Cheng, Vishwesh Nath, Jinyi Hu, Sifei Liu, Ranjay Krishna, Daguang Xu, Xiaolong Wang, Pavlo Molchanov, Jan Kautz, Hongxu Yin, Song Han, Yao Lu
cs.AI

Abstract

Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This "scale-then-compress" approach enables NVILA to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5X, fine-tuning memory usage by 3.4X, pre-filling latency by 1.6-2.2X, and decoding latency by 1.2-2.8X. We will soon make our code and models available to facilitate reproducibility.
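To make the "scale-then-compress" idea concrete, here is a minimal conceptual sketch: a vision encoder is run at a scaled-up resolution (producing more visual tokens), and neighboring tokens are then merged spatially before projection into the LLM embedding space. This is an illustrative assumption, not NVILA's actual module; the class name `ScaleThenCompress`, the 2x2 pooling block, and the encoder interface are hypothetical, and the paper's exact compression design is described in the full text.

```python
import torch
import torch.nn as nn

class ScaleThenCompress(nn.Module):
    """Hypothetical sketch of a 'scale-then-compress' visual token path:
    encode at high resolution (many tokens), then merge each 2x2 window
    of neighboring tokens into one before projecting to the LLM width."""

    def __init__(self, encoder, hidden_dim, llm_dim, block=2):
        super().__init__()
        self.encoder = encoder  # any ViT-style encoder returning (B, N, C)
        self.block = block      # 2x2 merge -> 4x fewer visual tokens
        self.proj = nn.Linear(hidden_dim * block * block, llm_dim)

    def forward(self, images):
        feats = self.encoder(images)       # (B, N, C), N = H * W tokens
        b, n, c = feats.shape
        h = w = int(n ** 0.5)              # assume a square token grid
        feats = feats.view(b, h, w, c)
        # split the grid into block x block windows
        feats = feats.view(b, h // self.block, self.block,
                           w // self.block, self.block, c)
        # gather each window's tokens along the channel dimension
        feats = feats.permute(0, 1, 3, 2, 4, 5).reshape(
            b, (h // self.block) * (w // self.block),
            self.block * self.block * c)
        return self.proj(feats)            # (B, N/4, llm_dim)

# Usage with a stand-in encoder (dummy ViT producing a 24x24 token grid):
encoder = lambda x: torch.randn(x.shape[0], 24 * 24, 1024)
model = ScaleThenCompress(encoder, hidden_dim=1024, llm_dim=4096)
tokens = model(torch.randn(2, 3, 384, 384))  # -> (2, 144, 4096)
```

In this sketch, 576 high-resolution tokens are reduced to 144 while preserving the fine-grained features captured at the scaled-up resolution, which is the intuition behind processing high-resolution images and long videos efficiently.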
