
Bi-Level Motion Imitation for Humanoid Robots

October 2, 2024
作者: Wenshuai Zhao, Yi Zhao, Joni Pajarinen, Michael Muehlebach
cs.AI

Abstract

Imitation learning from human motion capture (MoCap) data provides a promising way to train humanoid robots. However, due to differences in morphology, such as varying degrees of joint freedom and force limits, exact replication of human behaviors may not be feasible for humanoid robots. Consequently, incorporating physically infeasible MoCap data in training datasets can adversely affect the performance of the robot policy. To address this issue, we propose a bi-level optimization-based imitation learning framework that alternates between optimizing both the robot policy and the target MoCap data. Specifically, we first develop a generative latent dynamics model using a novel self-consistent auto-encoder, which learns sparse and structured motion representations while capturing desired motion patterns in the dataset. The dynamics model is then utilized to generate reference motions while the latent representation regularizes the bi-level motion imitation process. Simulations conducted with a realistic model of a humanoid robot demonstrate that our method enhances the robot policy by modifying reference motions to be physically consistent.
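The alternation described in the abstract can be illustrated with a toy sketch: an inner step trains the policy to track the current reference motion, and an outer step nudges the reference toward what the robot can physically achieve, regularized toward the latent-model prior. The one-dimensional "dynamics", losses, and step sizes below are hypothetical stand-ins, not the paper's actual formulation.

```python
# Toy sketch of bi-level motion imitation (illustrative only).
# A scalar "policy gain" plays the role of the robot policy; the
# rollout, losses, and step sizes are hypothetical stand-ins.

def rollout(policy_gain, reference):
    """Hypothetical robot rollout: the robot tracks each reference
    joint target imperfectly, scaled by its policy gain."""
    return [policy_gain * q for q in reference]

def bilevel_imitation(reference, latent_prior, steps=200,
                      lr_policy=0.1, lr_ref=0.05, reg=0.1):
    policy_gain = 0.0
    for _ in range(steps):
        # Inner step: improve the policy to track the current reference
        # (gradient of the squared tracking error w.r.t. the gain).
        achieved = rollout(policy_gain, reference)
        grad = sum((a - r) * r for a, r in zip(achieved, reference))
        policy_gain -= lr_policy * grad / len(reference)

        # Outer step: move the reference toward the physically achieved
        # motion, regularized toward the latent-model prior.
        achieved = rollout(policy_gain, reference)
        reference = [
            r + lr_ref * ((a - r) + reg * (p - r))
            for r, a, p in zip(reference, achieved, latent_prior)
        ]
    return policy_gain, reference
```

At the fixed point the policy tracks the (adjusted) reference exactly, so the outer step only pulls the reference toward the latent prior; this mirrors the abstract's claim that infeasible targets get modified rather than imitated verbatim.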
