Mogo: RQ Hierarchical Causal Transformer for High-Quality 3D Human Motion Generation

December 5, 2024
Author: Dongjie Fu
cs.AI

Abstract

In the field of text-to-motion generation, BERT-type masked models (MoMask, MMM) currently produce higher-quality outputs than GPT-type autoregressive models (T2M-GPT). However, BERT-type models often lack the streaming output capability required in video game and multimedia environments, a feature inherent to GPT-type models, and they perform worse on out-of-distribution generation. To surpass the quality of BERT-type models while retaining a GPT-type structure, and without adding extra refinement models that complicate data scaling, we propose a novel architecture, Mogo (Motion Only Generate Once), which generates high-quality, lifelike 3D human motions by training a single Transformer model. Mogo consists of only two main components: 1) RVQ-VAE, a hierarchical residual vector quantization variational autoencoder that discretizes continuous motion sequences with high precision; 2) a Hierarchical Causal Transformer that generates the base motion sequence autoregressively while simultaneously inferring the residuals across layers. Experimental results demonstrate that Mogo can generate continuous and cyclic motion sequences of up to 260 frames (13 seconds), surpassing the 196-frame (10-second) length limit of existing datasets such as HumanML3D. On the HumanML3D test set, Mogo achieves an FID score of 0.079, outperforming the GPT-type models T2M-GPT (FID = 0.116) and AttT2M (FID = 0.112) as well as the BERT-type model MMM (FID = 0.080). Furthermore, our model achieves the best quantitative performance in out-of-distribution generation.
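The core idea behind the RVQ-VAE component — each codebook layer quantizes the residual left over by the previous layer, yielding a coarse-to-fine hierarchy of discrete codes — can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's implementation; the codebook sizes and contents are placeholders:

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Residual vector quantization.

    x:         (n, dim) array of continuous latent vectors.
    codebooks: list of (codebook_size, dim) arrays, one per RVQ layer.

    Each layer picks the nearest codebook entry to the *residual* of
    the previous layers, so later layers refine earlier ones.
    Returns the per-layer code indices and the summed quantization.
    """
    residual = x.copy()
    quantized = np.zeros_like(x)
    codes = []
    for cb in codebooks:
        # Squared distance from every residual vector to every code entry.
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        idx = dists.argmin(axis=1)   # nearest entry per vector
        q = cb[idx]
        codes.append(idx)
        quantized += q               # accumulate coarse-to-fine approximation
        residual -= q                # next layer sees only what is left
    return codes, quantized
```

Decoding simply sums the selected entries across layers; in Mogo the Hierarchical Causal Transformer predicts these per-layer code indices autoregressively instead of computing them from a known latent.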
