
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model

1S-Lab, Nanyang Technological University  2SenseTime Research 
*equal contribution  +corresponding author
[Demo clips: "play the guitar", "walk sadly", "walk happily", "check time"]

This repository contains the official implementation of MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model.


Updates

[10/2022] Add a 🤗Hugging Face Demo for text-driven motion generation!

[10/2022] Add a Colab Demo for text-driven motion generation!

[10/2022] Code release for text-driven motion generation!

[8/2022] Paper uploaded to arXiv.

Text-driven Motion Generation

Please refer to this file for a detailed introduction.
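At a high level, MotionDiffuse generates a motion sequence by starting from Gaussian noise and iteratively denoising it, conditioned on a text prompt. The following is a minimal, schematic sketch of that reverse-diffusion idea; the function names, tensor shapes, and noise schedule here are illustrative assumptions, not this repository's actual API:

```python
import numpy as np

def sample_motion(denoise_fn, text_emb, num_frames=60, num_joints=22, steps=50, seed=0):
    """Schematic DDPM-style sampler: start from Gaussian noise and
    iteratively denoise, conditioned on a text embedding.
    Hypothetical illustration only, not MotionDiffuse's implementation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((num_frames, num_joints * 3))  # noisy motion sequence
    betas = np.linspace(1e-4, 0.02, steps)                 # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(steps)):
        eps_hat = denoise_fn(x, t, text_emb)               # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])      # posterior mean
        if t > 0:                                          # add noise except at last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

# Stub standing in for the text-conditioned denoising transformer.
def dummy_denoiser(x, t, text_emb):
    return 0.1 * x

motion = sample_motion(dummy_denoiser, text_emb=None)
print(motion.shape)  # (60, 66)
```

In the real model, `denoise_fn` would be a transformer that attends over the motion frames and the text embedding; the stub here only exists to make the loop runnable.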

Citation

If you find our work useful for your research, please consider citing the paper:

@article{zhang2022motiondiffuse,
  title={MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model},
  author={Zhang, Mingyuan and Cai, Zhongang and Pan, Liang and Hong, Fangzhou and Guo, Xinying and Yang, Lei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2208.15001},
  year={2022}
}

Acknowledgements

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s).
