MotionGPT: Human Motion as Foreign Language

Fudan University
Tencent PCG
ShanghaiTech University

Paper
Abstract

Though the advancement of pre-trained large language models unfolds, the exploration of building a unified model for language and other multimodal data, such as motion, remains challenging and untouched so far. Fortunately, human motion displays a semantic coupling akin to human language and is often perceived as a form of body language. By fusing language data with large-scale motion models, motion-language pre-training that enhances the performance of motion-related tasks becomes feasible. Driven by this insight, we propose MotionGPT, a unified, versatile, and user-friendly motion-language model to handle multiple motion-relevant tasks. Specifically, we employ discrete vector quantization for human motion and transfer 3D motion into motion tokens, similar to the generation process of word tokens. Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language. Moreover, inspired by prompt learning, we pre-train MotionGPT with a mixture of motion-language data and fine-tune it on prompt-based question-and-answer tasks. Extensive experiments demonstrate that MotionGPT achieves state-of-the-art performance on multiple motion tasks, including text-driven motion generation, motion captioning, motion prediction, and motion in-between.

MotionGPT teaser

Q: "Please explain the movement shown in [Motion_tokens] using natural language."

A: "a person walks forward then does a backwards z-shape movement to its left side. then back to the right."

Q: "What kind of action is being represented in [Motion_tokens]? Explain it in text."

A: "a person catches a ball, winds up and throws the ball then stops standing sideways."

Q: "Generate text for [Motion_tokens]:"

A: "a person is kick boxing and holding his arms in front of his chest."

Q: "Provide a summary of the motion demonstrated in [Motion_tokens] using words."

A: "a person lifts and bends their left leg/knee, then sweeps the leg in a counterclockwise motion back to the starting position, then repeats that process once more."

Q: "What action is depicted in [Motion_tokens]? Please explain it in text."

A: "a person takes two steps forward then turns to their right 180 degrees and takes two steps away."

Q: "Provide a summary of the action depicted in [Motion_tokens] using language."

A: "the man shakes something places it on the table and steps backwards."

Q: "Explain the motion demonstrated in [Motion_tokens] using natural language."

A: "a person holds on to things on both sides while carefully walking."

Q: "Describe the motion represented by [Motion_tokens] in plain English."

A: "a person kicks two times on his left then kicks forward two times"

Our Method

To involve large language data and models in motion generation tasks, we propose a unified motion-language framework named MotionGPT. MotionGPT consists of a motion tokenizer, responsible for converting raw motion data into discrete motion tokens, and a motion-aware language model, initialized from large pre-trained language models, that learns to understand motion tokens through their corresponding textual descriptions.
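To make the tokenizer idea concrete, here is a minimal sketch (not the authors' code) of the vector-quantization step: encoded motion features are snapped to their nearest codebook entries, and the resulting ids are rendered as text-like tokens so a language model can treat motion as a "foreign language" alongside word tokens. The codebook size, feature dimension, and the `<motion_i>` token naming are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes; in MotionGPT these are learned hyperparameters.
CODEBOOK_SIZE = 512
FEATURE_DIM = 256

rng = np.random.default_rng(0)
# In practice the codebook is learned jointly with a motion encoder/decoder
# (VQ-VAE style); here it is random for demonstration.
codebook = rng.normal(size=(CODEBOOK_SIZE, FEATURE_DIM))

def quantize(features):
    """Map encoded motion features of shape (T, D) to token ids of shape (T,)
    by nearest-neighbor lookup (squared L2) in the codebook."""
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def to_prompt_tokens(token_ids):
    """Render motion token ids as special text tokens, so motion can be
    interleaved with ordinary word tokens in a prompt."""
    return " ".join(f"<motion_{i}>" for i in token_ids)

# Stand-in for the motion encoder's output on an 8-frame clip.
encoded = rng.normal(size=(8, FEATURE_DIM))
ids = quantize(encoded)
print(to_prompt_tokens(ids))
```

After this step, a prompt such as "Describe the motion represented by [Motion_tokens]" simply substitutes the rendered motion tokens into the text stream, and standard language modeling applies to the combined vocabulary.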

MotionGPT pipeline

Comparison Results

Citation

If you find our code or paper helpful, please consider citing:

@article{jiang2023motiongpt,
    title={MotionGPT: Human Motion as a Foreign Language},
    author={Jiang, Biao and Chen, Xin and Liu, Wen and Yu, Jingyi and Yu, Gang and Chen, Tao},
    journal={arXiv preprint arXiv:2306.14795},
    year={2023}
}

Made with Next.js, Tailwind CSS and shadcn/ui. Icons from Lucide. Style inspired by RERENDER A VIDEO.