
OHTA: One-shot Hand Avatar via Data-driven Implicit Priors

PICO, ByteDance
*Equal contribution   Corresponding author
🤩 Accepted to CVPR 2024

OHTA is a novel approach for creating implicit, animatable hand avatars from just a single image. It supports 1) text-to-avatar conversion, 2) hand texture and geometry editing, and 3) interpolation and sampling in the latent space.


YouTube

📣 Updates

[02/2024] 🥳 OHTA is accepted to CVPR 2024! We are working on the code release!

🤟 Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{zheng2024ohta,
  title={OHTA: One-shot Hand Avatar via Data-driven Implicit Priors},
  author={Zheng, Xiaozheng and Wen, Chao and Su, Zhuo and Xu, Zeran and Li, Zhaohu and Zhao, Yang and Xue, Zhou},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

🗞️ License

Distributed under the MIT License. See LICENSE for more information.