V2V-PoseNet

Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map.

Input

(The depth map from the MSRA Hand Gesture Dataset)

Output

  • inference
  • ground-truth

Usage

To run examples other than the bundled sample image, download the MSRA Hand Gesture Dataset and extract it to the msra_dataset directory as shown below (a minimal loader sketch for the binary depth frames follows the tree).

v2v-posenet
└── msra_dataset
    ├── P0
    ...
    ├── P3
    │   ├── 1
    │   │   ├── 000000_depth.bin
    │   │   ├── 000001_depth.bin
    │   │   ...
    │   │   └── joint.txt
    │   ├── 2
    │   ...
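If you want to inspect a dataset frame directly, the sketch below shows one way to read a *_depth.bin file. It assumes the commonly documented MSRA layout (a six-int32 header holding the image size and the hand bounding box, followed by float32 depth values in millimetres for the cropped region); verify against the dataset description before relying on it.

import numpy as np

def load_msra_depth(path):
    with open(path, "rb") as f:
        # Header: image width, image height, bbox left, top, right, bottom (int32).
        width, height, left, top, right, bottom = np.fromfile(f, dtype=np.int32, count=6)
        # Depth values (float32, mm) cover only the bounding box, row-major.
        cropped = np.fromfile(f, dtype=np.float32).reshape(bottom - top, right - left)
    # Paste the crop back into a full-size depth map; pixels outside stay zero.
    depth = np.zeros((height, width), dtype=np.float32)
    depth[top:bottom, left:right] = cropped
    return depth

depth = load_msra_depth("msra_dataset/P3/1/000000_depth.bin")
print(depth.shape, float(depth.max()))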

It also uses precomputed hand centers, which are obtained by training the hand center estimation network from DeepPrior++.
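As a rough illustration, the snippet below parses such a centers file, assuming the format used in the original V2V-PoseNet release: one line per frame holding either an x y z reference point in millimetres, or a marker such as "invalid" when no hand center was found. The file name is only a placeholder.

import numpy as np

def load_centers(path):
    centers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                centers.append([float(v) for v in parts])
            else:
                centers.append([np.nan] * 3)  # frame without a valid center
    return np.array(centers, dtype=np.float32)

centers = load_centers("center_test_refined.txt")  # placeholder file name
print(centers.shape)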

The ONNX and prototxt model files are downloaded automatically on the first run; an Internet connection is required while they are being downloaded.

For the sample image,

$ python3 v2v-posenet.py

If you want to specify the input depth map, pass its path with the --input option.

$ python3 v2v-posenet.py --input DEPTH_MAP

You can draw ground-truth keypoints by specifying the --gt option.

$ python3 v2v-posenet.py --gt

Reference

Framework

PyTorch

Model Format

ONNX opset = 11
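As a rough sanity check of the exported model, the sketch below runs it with onnxruntime, assuming the tensor shapes commonly used by V2V-PoseNet: a 1×1×88×88×88 occupancy voxel grid in, one 44×44×44 heatmap per joint out. The file name, input handling, and shapes are assumptions; inspect the actual model in Netron first.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("v2v-posenet.onnx")  # assumed file name
input_name = session.get_inputs()[0].name

# Placeholder voxelized depth map; the real demo builds this from the
# depth frame and the precomputed hand center.
voxels = np.zeros((1, 1, 88, 88, 88), dtype=np.float32)
heatmaps = session.run(None, {input_name: voxels})[0]  # e.g. (1, J, 44, 44, 44)

# Take the argmax of each joint's 3D heatmap as its voxel coordinate.
num_joints = heatmaps.shape[1]
coords = np.stack([
    np.unravel_index(np.argmax(heatmaps[0, j]), heatmaps[0, j].shape)
    for j in range(num_joints)
])
print(coords.shape)  # (num_joints, 3), in voxel-grid coordinates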

Netron