

TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration

October 16, 2024
Authors: Yiwei Guo, Shaobin Zhuang, Kunchang Li, Yu Qiao, Yali Wang
cs.AI

Abstract

Vision-language foundation models (such as CLIP) have recently shown their power in transfer learning, owing to large-scale image-text pre-training. However, target domain data in the downstream tasks can be highly different from the pre-training phase, which makes it hard for such a single model to generalize well. Alternatively, there exists a wide range of expert models that contain diversified vision and/or language knowledge pre-trained on different modalities, tasks, networks, and datasets. Unfortunately, these models are "isolated agents" with heterogeneous structures, and how to integrate their knowledge for generalizing CLIP-like models has not been fully explored. To bridge this gap, we propose a general and concise TransAgent framework, which transfers the knowledge of the isolated agents in a unified manner, and effectively guides CLIP to generalize with multi-source knowledge distillation. With such a distinct framework, we flexibly collaborate with 11 heterogeneous agents to empower vision-language foundation models, without further cost in the inference phase. Finally, our TransAgent achieves state-of-the-art performance on 11 visual recognition datasets. Under the same low-shot setting, it outperforms the popular CoOp by around 10% on average, and by 20% on EuroSAT, which contains large domain shifts.
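
The abstract describes guiding a CLIP-like student with multi-source knowledge distillation from heterogeneous teacher agents, where the agents are only used during training and add no inference cost. Below is a minimal, illustrative PyTorch sketch of that general idea: frozen teachers with different feature sizes are projected into the student's embedding space, fused by a learned gate, and matched by the student via a distillation loss. The names (`GatedMultiTeacherDistiller`, `distillation_loss`) and the feature-matching objective are assumptions for illustration, not the paper's exact architecture or loss.

```python
# Minimal sketch of multi-source knowledge distillation into a CLIP-like student.
# Module and function names are hypothetical stand-ins for the paper's components.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedMultiTeacherDistiller(nn.Module):
    """Projects features from several frozen teacher agents into the student's
    embedding space and fuses them with a learned softmax gate. Teachers are
    only needed at training time; at inference the student runs alone."""

    def __init__(self, teacher_dims, student_dim):
        super().__init__()
        # One linear projector per heterogeneous teacher (feature sizes may differ).
        self.projectors = nn.ModuleList(
            [nn.Linear(d, student_dim) for d in teacher_dims]
        )
        # Learnable gating weights over the teachers.
        self.gate_logits = nn.Parameter(torch.zeros(len(teacher_dims)))

    def forward(self, teacher_feats):
        # teacher_feats: list of (batch, teacher_dim_i) tensors from frozen agents.
        projected = torch.stack(
            [proj(f) for proj, f in zip(self.projectors, teacher_feats)], dim=0
        )  # (num_teachers, batch, student_dim)
        weights = F.softmax(self.gate_logits, dim=0)  # (num_teachers,)
        fused = (weights[:, None, None] * projected).sum(dim=0)
        return fused  # (batch, student_dim)


def distillation_loss(student_feat, fused_teacher_feat):
    # Simple feature-matching objective; the paper's actual loss may differ.
    return F.mse_loss(student_feat, fused_teacher_feat)


# Toy usage: 3 teachers with different feature sizes, a CLIP-like student of dim 512.
if __name__ == "__main__":
    distiller = GatedMultiTeacherDistiller(teacher_dims=[768, 1024, 384], student_dim=512)
    batch = 4
    teacher_feats = [torch.randn(batch, d) for d in (768, 1024, 384)]
    student_feat = torch.randn(batch, 512, requires_grad=True)  # stand-in for CLIP features
    loss = distillation_loss(student_feat, distiller(teacher_feats))
    loss.backward()  # gradients reach the student features and the distiller's gate/projectors
    print(f"distillation loss: {loss.item():.4f}")
```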
