
FastPose

FastPose is a small and fast multi-person pose estimator that uses middle points to group keypoints. It is 46% smaller and 47% faster (forward time) than OpenPose, without relying on existing model compression and acceleration methods such as MobileNet or quantization. For more details, please refer to the technical report.
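The grouping idea can be sketched in a few lines. Everything below (function names, heatmap shapes, the greedy matching) is only an illustration of midpoint-based grouping in general, not the repository's actual implementation: a candidate pair of keypoints is accepted when the predicted middle-point heatmap responds strongly at their geometric midpoint.

```python
import numpy as np

def midpoint_score(heatmap_mid, kp_a, kp_b):
    """Score a candidate keypoint pair using the middle-point heatmap.

    heatmap_mid : 2-D array, predicted heatmap for the midpoint of this limb
    kp_a, kp_b  : (x, y) coordinates of two candidate keypoints
    """
    # Geometric midpoint of the two candidates.
    mx = int(round((kp_a[0] + kp_b[0]) / 2.0))
    my = int(round((kp_a[1] + kp_b[1]) / 2.0))
    h, w = heatmap_mid.shape
    if not (0 <= mx < w and 0 <= my < h):
        return 0.0
    # A high response at the midpoint suggests both keypoints belong to one person.
    return float(heatmap_mid[my, mx])

def group_pairs(heatmap_mid, kps_a, kps_b, threshold=0.1):
    """Greedily match keypoints of two types by descending midpoint score."""
    scored = [(midpoint_score(heatmap_mid, a, b), i, j)
              for i, a in enumerate(kps_a) for j, b in enumerate(kps_b)]
    scored.sort(reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for s, i, j in scored:
        if s < threshold or i in used_a or j in used_b:
            continue
        used_a.add(i)
        used_b.add(j)
        pairs.append((i, j, s))
    return pairs
```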

Installation

  1. Get the code.
git clone https://github.com/ZexinChen/FastPose.git
  2. Install PyTorch 0.4.0 and the other dependencies.
pip install -r requirements.txt
  3. Download the model weights manually: fastpose.pth (Google Drive | Baidu pan). Place the file into ./network/weights/ .
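After step 3, a short script like the following can confirm the weights are in place and loadable. It assumes fastpose.pth is an ordinary PyTorch checkpoint; the actual key layout may differ.

```python
import os
import torch

weights_path = './network/weights/fastpose.pth'
assert os.path.isfile(weights_path), 'fastpose.pth not found in ./network/weights/'

# map_location='cpu' lets the check run on a machine without a GPU.
checkpoint = torch.load(weights_path, map_location='cpu')
print(type(checkpoint).__name__)
if isinstance(checkpoint, dict):
    # Print a few entries to confirm the file deserialized as expected.
    print(list(checkpoint.keys())[:5])
```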

Demo

You can run ./picture_demo.ipynb to see the demo on your own image by changing the test_image path.
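If you want to adapt the notebook into a plain script, the image preparation looks roughly like the sketch below. The input resolution, normalization, and the commented-out model construction are assumptions; check picture_demo.ipynb for the actual values.

```python
import cv2
import torch

test_image = './readme/example.jpg'  # hypothetical path; point it at your own image

img = cv2.imread(test_image)
assert img is not None, 'could not read ' + test_image

# Resize and scale to [0, 1]; the 368x368 input size is an assumption
# borrowed from OpenPose-style networks, not a confirmed FastPose value.
inp = cv2.resize(img, (368, 368)).astype('float32') / 255.0

# HWC (BGR) -> NCHW float tensor, the layout PyTorch models expect.
tensor = torch.from_numpy(inp).permute(2, 0, 1).unsqueeze(0)
print(tensor.shape)  # torch.Size([1, 3, 368, 368])

# model = ...  # build the FastPose network and load ./network/weights/fastpose.pth
#              # exactly as picture_demo.ipynb does, then:
# with torch.no_grad():
#     heatmaps = model(tensor)
```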

Training

  1. Prepare the COCO dataset:
    a. Download COCO.json (Google Drive | Baidu pan | Dropbox). Place it into ./data/coco/ .
    b. Download mask.tar.gz (Google Drive | Baidu pan). Untar it into ./data/coco/ .
    c. Download the COCO 2014 dataset:
bash ./training/getData.sh

The data folder should then be organized as follows (a quick sanity check is sketched below the tree):

-data
   -coco
      -COCO.json
      -mask
      -annotations
      -images
         -train2014
         -val2014
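
Once the downloads finish, a quick check like this (paths taken from the tree above) catches missing pieces before training starts:

```python
import os

expected = [
    './data/coco/COCO.json',
    './data/coco/mask',
    './data/coco/annotations',
    './data/coco/images/train2014',
    './data/coco/images/val2014',
]
for path in expected:
    status = 'ok' if os.path.exists(path) else 'MISSING'
    print('{:>8}  {}'.format(status, path))
```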
  2. Run the training script. The default settings should work fine.
CUDA_VISIBLE_DEVICES=0,1 python3 train.py
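CUDA_VISIBLE_DEVICES=0,1 exposes GPUs 0 and 1 to the process, which then sees them re-indexed as cuda:0 and cuda:1. Whether train.py spreads the model across both is up to the script; a common PyTorch pattern for doing so (an assumption here, not a description of train.py) is nn.DataParallel:

```python
import torch

print(torch.cuda.device_count())  # 2 when CUDA_VISIBLE_DEVICES=0,1 is set

# Typical multi-GPU pattern in PyTorch 0.4-era code (illustration only):
# model = torch.nn.DataParallel(model).cuda()
```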

Contributors

FastPose is developed and maintained by Zexin Chen and Yuliang Xiu. Some code is adapted from the PyTorch version of OpenPose. Thanks to the original authors.

License

FastPose is freely available for non-commercial use and may be redistributed under these conditions. For commercial queries, please contact Cewu Lu.