

Perception Encoder: The best visual embeddings are not at the output of the network

April 17, 2025
Authors: Daniel Bolya, Po-Yao Huang, Peize Sun, Jang Hyun Cho, Andrea Madotto, Chen Wei, Tengyu Ma, Jiale Zhi, Jathushan Rajasegaran, Hanoona Rasheed, Junke Wang, Marco Monteiro, Hu Xu, Shiyu Dong, Nikhila Ravi, Daniel Li, Piotr Dollár, Christoph Feichtenhofer
cs.AI

Abstract

We introduce Perception Encoder (PE), a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. Traditionally, vision encoders have relied on a variety of pretraining objectives, each tailored to specific downstream tasks such as classification, captioning, or localization. Surprisingly, after scaling our carefully tuned image pretraining recipe and refining with our robust video data engine, we find that contrastive vision-language training alone can produce strong, general embeddings for all of these downstream tasks. There is only one caveat: these embeddings are hidden within the intermediate layers of the network. To draw them out, we introduce two alignment methods: language alignment for multimodal language modeling, and spatial alignment for dense prediction. Together with the core contrastive checkpoint, our PE family of models achieves state-of-the-art performance on a wide variety of tasks, including zero-shot image and video classification and retrieval; document, image, and video Q&A; and spatial tasks such as detection, depth estimation, and tracking. To foster further research, we are releasing our models, code, and a novel dataset of synthetically and human-annotated videos.
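The abstract's central claim, that the most useful general-purpose embeddings sit in intermediate layers rather than at the network's output, can be illustrated with a minimal sketch: run a forward pass, record every layer's activation, and read the embedding off a mid-network layer instead of the final one. The toy encoder below is purely illustrative (random weights, a hypothetical layer count, and an arbitrarily chosen intermediate layer), not PE's actual architecture or layer selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a stack of random linear layers standing in for
# transformer blocks. Dimensions and depth are illustrative only.
DIM, NUM_LAYERS = 64, 8
weights = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM)
           for _ in range(NUM_LAYERS)]

def encode_with_intermediates(x):
    """Forward pass that keeps every layer's output, not just the last."""
    activations = []
    h = x
    for w in weights:
        h = np.tanh(h @ w)  # placeholder nonlinearity
        activations.append(h)
    return activations

x = rng.standard_normal(DIM)
acts = encode_with_intermediates(x)

# The paper's point: a mid-network activation (layer index chosen
# arbitrarily here) may serve as a better general-purpose embedding
# than the network's final output, acts[-1].
intermediate_embedding = acts[NUM_LAYERS // 2]
final_output = acts[-1]
print(len(acts), intermediate_embedding.shape)
```

In a real framework one would capture these activations with forward hooks rather than rewriting the forward pass; the list-collection above is just the simplest way to show the idea.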

