Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing
September 25, 2024
Authors: Pei Xu, Ruocheng Wang
cs.AI
Abstract
We present a novel approach to synthesize dexterous motions for physically
simulated hands in tasks that require coordination between the control of two
hands with high temporal precision. Instead of directly learning a joint policy
to control two hands, our approach performs bimanual control through
cooperative learning where each hand is treated as an individual agent. The
individual policies for each hand are first trained separately, and then
synchronized through latent space manipulation in a centralized environment to
serve as a joint policy for two-hand control. By doing so, we avoid performing
policy learning directly in the higher-dimensional joint state-action space of
the two hands, greatly improving overall training efficiency. We
demonstrate the effectiveness of our proposed approach in the challenging
guitar-playing task. The virtual guitarist trained by our approach can
synthesize motions from unstructured reference data of general guitar-playing
practice motions, and accurately play diverse rhythms with complex chord
pressing and string-picking patterns from input guitar tabs that do not appear
in the reference data. Along with this paper, we provide the motion capture
data that we collected as the reference for policy training. Code is available
at: https://pei-xu.github.io/guitar.
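The abstract describes assembling a joint bimanual policy from two separately trained per-hand policies, coupled through latent-space manipulation in a centralized environment. The following Python sketch shows one plausible shape for such a controller. All names, dimensions, and the latent-blending rule are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

class HandPolicy:
    """Hypothetical per-hand policy: encodes a hand's observation into a
    latent code, then decodes a latent code into a joint-torque action."""
    def __init__(self, obs_dim, latent_dim, act_dim, seed):
        rng = np.random.default_rng(seed)
        # Stand-ins for pretrained encoder/decoder weights.
        self.W_enc = rng.standard_normal((latent_dim, obs_dim)) * 0.1
        self.W_dec = rng.standard_normal((act_dim, latent_dim)) * 0.1

    def encode(self, obs):
        return np.tanh(self.W_enc @ obs)

    def decode(self, z):
        return np.tanh(self.W_dec @ z)

class BimanualController:
    """Joint policy built from two pretrained per-hand policies.
    A synchronization step blends the two latent codes so each hand's
    action is conditioned on the other hand's intent (assumed mechanism)."""
    def __init__(self, left, right, sync_weight=0.25):
        self.left, self.right = left, right
        self.sync_weight = sync_weight  # coupling strength between latents

    def act(self, obs_left, obs_right):
        z_l = self.left.encode(obs_left)
        z_r = self.right.encode(obs_right)
        # Latent-space synchronization: mix each latent with the other's
        # before decoding, keeping the per-hand decoders fixed.
        z_l_sync = (1 - self.sync_weight) * z_l + self.sync_weight * z_r
        z_r_sync = (1 - self.sync_weight) * z_r + self.sync_weight * z_l
        return self.left.decode(z_l_sync), self.right.decode(z_r_sync)
```

The design point this sketch reflects is the one the abstract emphasizes: each `HandPolicy` is trained in its own lower-dimensional state-action space, and only the lightweight synchronization step operates jointly, so the joint policy never has to be learned from scratch in the combined space.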