UniMuMo: Unified Text, Music and Motion Generation
October 6, 2024
Authors: Han Yang, Kun Su, Yutong Zhang, Jiaben Chen, Kaizhi Qian, Gaowen Liu, Chuang Gan
cs.AI
Abstract
We introduce UniMuMo, a unified multimodal model capable of taking arbitrary
text, music, and motion data as input conditions to generate outputs across all
three modalities. To address the lack of time-synchronized data, we align
unpaired music and motion data based on rhythmic patterns to leverage existing
large-scale music-only and motion-only datasets. By converting music, motion,
and text into token-based representation, our model bridges these modalities
through a unified encoder-decoder transformer architecture. To support multiple
generation tasks within a single framework, we introduce several architectural
improvements. We propose encoding motion with a music codebook, mapping motion
into the same feature space as music. We introduce a music-motion parallel
generation scheme that unifies all music and motion generation tasks into a
single transformer decoder architecture with a single training task of
music-motion joint generation. Moreover, the model is designed by fine-tuning
existing pre-trained single-modality models, significantly reducing
computational demands. Extensive experiments demonstrate that UniMuMo achieves
competitive results on all unidirectional generation benchmarks across music,
motion, and text modalities. Quantitative results are available on the
project page: https://hanyangclarence.github.io/unimumo_demo/.
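The abstract highlights two architectural ideas: motion is quantized with the music codebook so both modalities share one discrete token space, and a single transformer decoder generates the music and motion token streams in parallel. Below is a minimal, hypothetical PyTorch sketch of those two ideas only; all module names, dimensions, vocabulary size, and the text-conditioning interface are illustrative assumptions, not the authors' implementation (which is built by fine-tuning existing pre-trained single-modality models).

```python
# Hypothetical sketch only: module names, sizes, and the text-conditioning
# interface are assumptions for illustration, not UniMuMo's actual code.
import torch
import torch.nn as nn

CODEBOOK_SIZE = 2048   # size of the shared music codebook (assumed)
D_MODEL = 512          # model width (assumed)

class MotionTokenizer(nn.Module):
    """Maps raw motion features onto the *music* codebook, so motion tokens
    live in the same discrete space as music tokens (nearest-code lookup)."""
    def __init__(self, motion_dim: int, codebook: nn.Embedding):
        super().__init__()
        self.proj = nn.Linear(motion_dim, codebook.embedding_dim)
        self.codebook = codebook  # shared with the music token embedding

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (batch, frames, motion_dim) -> token ids: (batch, frames)
        z = self.proj(motion)                               # (B, T, D)
        dists = torch.cdist(z, self.codebook.weight[None])  # (B, T, K)
        return dists.argmin(dim=-1)

class ParallelMusicMotionDecoder(nn.Module):
    """One decoder that predicts the music stream and the motion stream in
    parallel: each step emits a music token and a motion token."""
    def __init__(self):
        super().__init__()
        self.token_emb = nn.Embedding(CODEBOOK_SIZE, D_MODEL)
        layer = nn.TransformerDecoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.music_head = nn.Linear(D_MODEL, CODEBOOK_SIZE)
        self.motion_head = nn.Linear(D_MODEL, CODEBOOK_SIZE)

    def forward(self, music_ids, motion_ids, text_memory):
        # Fuse the two time-aligned streams into one decoder input sequence.
        x = self.token_emb(music_ids) + self.token_emb(motion_ids)  # (B, T, D)
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.decoder(x, text_memory, tgt_mask=causal)
        return self.music_head(h), self.motion_head(h)

# Toy usage with random tensors (shapes only, no real data or checkpoints).
decoder = ParallelMusicMotionDecoder()
tokenizer = MotionTokenizer(motion_dim=263, codebook=decoder.token_emb)  # 263: assumed motion feature size
music_ids = torch.randint(0, CODEBOOK_SIZE, (2, 50))
motion_ids = tokenizer(torch.randn(2, 50, 263))
text_memory = torch.randn(2, 16, D_MODEL)  # stand-in for text-encoder features
music_logits, motion_logits = decoder(music_ids, motion_ids, text_memory)
print(music_logits.shape, motion_logits.shape)  # (2, 50, 2048) each
```

In this toy version, summing the two embedded streams at each time step is just one simple way to realize "parallel" generation with shared weights; the paper's exact conditioning and decoding schedule is not specified in the abstract.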