Evaluations on hand pose estimation

Description

This project provides code to evaluate the performance of hand pose estimation methods on several public datasets, including the NYU, ICVL, and MSRA hand pose datasets. We collect the predicted labels of prior work that are available online and visualize their performance.

Evaluation metric

Two types of evaluation metrics are widely used for hand pose estimation (a minimal sketch of both is given after this list):

(1) Mean error for each joint

(2) Success rate:

  • The proportion of test frames whose average error falls below a threshold
  • The proportion of test frames whose maximum error falls below a threshold
  • The proportion of all joints whose error falls below a threshold
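As a reference, the following minimal NumPy sketch computes both metrics. It assumes predictions and ground truth are arrays of shape (num_frames, num_joints, 3) in millimetres; the function and variable names are illustrative and are not the API of compute_error.py.

import numpy as np

def evaluate(pred, gt, threshold):
    """pred, gt: (num_frames, num_joints, 3) joint locations in mm; threshold in mm."""
    # Per-joint Euclidean error for every frame: shape (num_frames, num_joints)
    err = np.linalg.norm(pred - gt, axis=2)

    # (1) Mean error for each joint, averaged over all test frames
    mean_error_per_joint = err.mean(axis=0)

    # (2) Success rates for the given error threshold
    mean_frame = (err.mean(axis=1) < threshold).mean()  # frames whose average error is below the threshold
    max_frame = (err.max(axis=1) < threshold).mean()    # frames whose maximum error is below the threshold
    joint = (err < threshold).mean()                     # proportion of all joints whose error is below the threshold

    return mean_error_per_joint, mean_frame, max_frame, joint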

Methods and corresponding predicted labels

ICVL

NYU

MSRA

[back to top]

Notes

  • Note that for the NYU dataset only 14 out of 36 annotated joints are used for evaluation; we use the joints with IDs [0, 3, 6, 9, 12, 15, 18, 21, 24, 25, 27, 30, 31, 32]. All labels are in (u, v, d) format, where u and v are pixel coordinates and d is the depth.

  • The code to plot errors over different yaw and pitch angles for the MSRA dataset is still under construction. Stay tuned.

  • For Lie-X, the original predicted labels are in (x, y, z) format and the order of joints is different. We convert the labels from xyz to uvd and permute the joints to be consistent with other methods (see src/convert_results_xyz2uvd_LieX.py; a generic sketch of this projection is given after these notes).

  • For DenseReg, we convert the original predicted labels from xyz to uvd (see src/convert_results_xyz2uvd_denseReg.py).

  • Since DeepPrior [4] and DeepPrior++ [9] only provide predicted labels for Sequence A (702 frames) of the ICVL dataset (1596 frames in total across the two test sequences), we haven't included these methods in the ICVL comparisons yet.

  • DeepPrior++ [9] also provides predicted labels for the MSRA dataset online. However, the results appear to be shuffled, so we haven't included them yet. Stay tuned.

  • For 3DCNN, HandPointNet and Point-to-Point, we convert the original predicted labels from xyz to uvd (see src/convert_results_xyz2uvd_Ge.py).

  • The annotations of the MSRA dataset used by V2V-PoseNet are slightly different from those used in prior work (see the discussions here), so we haven't included its results yet.
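The xyz-to-uvd conversions mentioned above follow the standard pinhole projection. The sketch below is a generic illustration, assuming the depth camera's focal lengths (fx, fy) and principal point (cx, cy) are known; exact sign conventions and intrinsics differ between datasets, so the repository's src/convert_results_xyz2uvd_*.py scripts remain the reference.

import numpy as np

def xyz2uvd(xyz, fx, fy, cx, cy):
    """Project (x, y, z) joints in camera space (mm) to (u, v, d) image coordinates."""
    xyz = np.asarray(xyz, dtype=np.float64)   # shape (..., 3)
    u = xyz[..., 0] / xyz[..., 2] * fx + cx   # pixel column
    v = xyz[..., 1] / xyz[..., 2] * fy + cy   # pixel row
    d = xyz[..., 2]                           # depth is kept in mm
    return np.stack([u, v, d], axis=-1)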

[back to top]

Usage

Run the Python script to show the evaluation results:

python compute_error.py icvl/nyu/msra max-frame/mean-frame/joint method_names in_files

The first argument specifies the dataset to evaluate (icvl, nyu, or msra) and the second selects which of the success-rate metrics listed above to use (max-frame, mean-frame, or joint). The remaining arguments give the method names and their corresponding predicted-label files.
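For instance, a hypothetical invocation comparing two methods on the NYU dataset with the mean-frame metric could look as follows (the label file paths are placeholders, not files shipped with this repository):

python compute_error.py nyu mean-frame REN-9x6x6 Pose-REN results/nyu_ren_9x6x6.txt results/nyu_pose_ren.txt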

We also provide easy-to-use bash scripts to display the performance of several methods; just run the following command, where {dataset} is one of icvl, nyu, or msra:

sh evaluate_{dataset}.sh
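For example, to show the results on the NYU dataset:

sh evaluate_nyu.sh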

[back to top]

Results

Results on NYU dataset

Figure: figures/nyu_error_bar.png

| Methods | 3D Error (mm) |
| --- | --- |
| DeepPrior [4] | 20.750 |
| DeepPrior-Refine [4] | 19.726 |
| DeepModel [2] | 17.036 |
| Feedback [5] | 15.973 |
| Guo_Baseline [3] | 14.588 |
| Lie-X [6] | 14.507 |
| DeepHPS [20] | 14.415 |
| 3DCNN [11] | 14.113 |
| REN-4x6x6 [3] | 13.393 |
| REN-9x6x6 [7] | 12.694 |
| DeepPrior++ [9] | 12.238 |
| Pose-REN [8] | 11.811 |
| Generalized-Feedback [18] | 10.894 |
| SHPR-Net [14] | 10.775 |
| HandPointNet [15] | 10.540 |
| DenseReg [10] | 10.214 |
| CrossInfoNet [19] | 10.078 |
| MURAUER [17] | 9.466 |
| WHSP-Net [21] | 9.421 |
| SHPR-Net (three views) [14] | 9.371 |
| SRNet [23] | 9.173 |
| Point-to-Point [16] | 9.045 |
| V2V-PoseNet [12] | 8.419 |
| TriHorn-Net [22] | 7.68 |
| FeatureMapping [13] | 7.441 |

[back to top]

Results on ICVL dataset

Figure: figures/icvl_error_bar.png

| Methods | 3D Error (mm) |
| --- | --- |
| LRF [1] | 12.578 |
| DeepModel [2] | 11.561 |
| Guo_Baseline [3] | 8.358 |
| REN-4x6x6 [3] | 7.628 |
| REN-9x6x6 [7] | 7.305 |
| DenseReg [10] | 7.239 |
| SHPR-Net [14] | 7.219 |
| HandPointNet [15] | 6.935 |
| Pose-REN [8] | 6.791 |
| CrossInfoNet [19] | 6.732 |
| Point-to-Point [16] | 6.328 |
| V2V-PoseNet [12] | 6.284 |
| SRNet [23] | 6.152 |
| TriHorn-Net [22] | 5.73 |

[back to top]

Results on MSRA dataset

Figure: figures/msra_error_bar.png

| Methods | 3D Error (mm) |
| --- | --- |
| REN-9x6x6 [7] | 9.792 |
| 3DCNN [11] | 9.584 |
| Pose-REN [8] | 8.649 |
| HandPointNet [15] | 8.505 |
| SRNet [23] | 7.985 |
| CrossInfoNet [19] | 7.862 |
| SHPR-Net [14] | 7.756 |
| Point-to-Point [16] | 7.707 |
| DenseReg [10] | 7.234 |
| TriHorn-Net [22] | 7.13 |

[back to top]

Results on HANDS17 challenge dataset

See the leaderboard here for the sequence-based (tracking) and frame-based hand pose estimation tasks.

See the leaderboard here for the hand-object interaction hand pose estimation task.

[back to top]

Reference

[back to top]