
Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers

September 30, 2024
Authors: Lirui Wang, Xinlei Chen, Jialiang Zhao, Kaiming He
cs.AI

Abstract

One of the roadblocks to training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train one specific embodiment for one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task- and embodiment-agnostic shared representation. This general architecture aligns the embodiment-specific proprioception and vision inputs from distinct embodiments into a short sequence of tokens, then processes these tokens to produce controls for different tasks. Leveraging recent large-scale multi-embodiment real-world robotic datasets as well as simulation, deployed-robot, and human-video datasets, we investigate pre-training policies across this heterogeneity. We conduct experiments to study the scaling behavior of the training objective across up to 52 datasets. HPT outperforms several baselines and improves fine-tuned policy performance by over 20% on unseen tasks in multiple simulator benchmarks and real-world settings. See the project website (https://liruiw.github.io/hpt/) for code and videos.
