- GitHub: ZheC/tf-pose-estimation
- GitHub: omasaht/headpose-fsanet-pytorch
- GitHub: jhuckaby/webcamjs
After installing the environment:
- Download the pretrained models from the tf-pose-estimation and headpose-fsanet-pytorch repositories linked above and put them in:
  - src/services/detection/fsanet_pytorch/pretrained
  - src/services/detection/tf_pose/models/graph
  - src/services/detection/tf_pose/models/pretrained
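Before copying the downloaded weights, the expected directory layout (paths taken from the list above) can be created with a short sketch like this:

```shell
# Sketch: create the pretrained-model directories listed in this README,
# then copy the downloaded weight files into them.
mkdir -p src/services/detection/fsanet_pytorch/pretrained
mkdir -p src/services/detection/tf_pose/models/graph
mkdir -p src/services/detection/tf_pose/models/pretrained
```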
- Install python3, python3-pip, and virtualenv (via pip)
- Edit the `.env` file:
  - `hasGPU`: `true` if a GPU is present, `None` if it is not
  - set `HOST` to the host IP
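A minimal `.env` might look like the following. The values are placeholders for illustration; only the `hasGPU` and `HOST` keys come from this README.

```shell
# .env — placeholder values, adjust for your machine
hasGPU=true        # set to 'None' if no GPU is available
HOST=192.168.1.10  # IP address the API server binds to
```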
Steps:
- Build the environment with virtualenv:
  `$ make BuildENV`
- Activate the virtualenv:
  `$ source bin/activate`
- Install dependency packages:
  `$ make InstallPackage`
- Run the tests:
  `$ make TEST`
- Run the API server:
  `$ make run`
The API configuration lives in `config.py`. Four tf_pose models can be selected via `ModelConfig.TF_POSE_TYPE`.
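As a hedged illustration of that setting, the relevant part of `config.py` might look like the sketch below. The class layout and model names here are assumptions, not copied from the repository; the four names listed are the graph variants commonly shipped with tf-pose-estimation.

```python
# Hypothetical sketch of ModelConfig in config.py -- names are assumed,
# not taken from the actual file.
class ModelConfig:
    # Four tf_pose graph variants (assumed to match the pretrained graphs
    # published by the tf-pose-estimation project).
    TF_POSE_MODELS = (
        "cmu",                 # original CMU OpenPose graph, most accurate
        "mobilenet_thin",      # lightweight, good CPU throughput
        "mobilenet_v2_large",  # MobileNetV2 backbone, larger variant
        "mobilenet_v2_small",  # smallest/fastest variant
    )
    # The selected model; must be one of the names above.
    TF_POSE_TYPE = "mobilenet_thin"

# Sanity check: the chosen type is one of the four known variants.
assert ModelConfig.TF_POSE_TYPE in ModelConfig.TF_POSE_MODELS
```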