Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking

[Overview figure]

Introduction

This paper addresses multi-person pose tracking, which aims to estimate and track human pose keypoints in video. We propose a pose-guided tracking-by-detection framework that fuses pose information into both the video human detection and the data association procedures. Specifically, we adopt a pose-guided single-object tracker to exploit temporal information and recover missing detections in the video human detection stage. Furthermore, we propose a hierarchical pose-guided graph convolutional network (PoseGCN) based appearance discriminative model for the data association stage. The GCN-based model exploits human structural relations to boost the person representation.
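
The released checkpoint below (pose_gcn.pth.tar) is the actual model; purely as an illustration of the pose-guided GCN idea, here is a minimal sketch of one graph-convolution layer over keypoint nodes, where each joint's features are aggregated from its skeleton neighbors. The class name, joint count, feature sizes, and the chain-shaped adjacency are placeholders, not the released PoseGCN architecture.

    # Illustrative sketch only (not the released PoseGCN): one graph-convolution
    # layer over human keypoint nodes, refining per-keypoint appearance features
    # with the skeleton's structural relations.
    import torch
    import torch.nn as nn

    class KeypointGCNLayer(nn.Module):
        def __init__(self, in_dim, out_dim, adjacency):
            super().__init__()
            # Normalized adjacency: D^-1/2 (A + I) D^-1/2 over the skeleton graph
            a_hat = adjacency + torch.eye(adjacency.size(0))
            d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
            self.register_buffer("a_hat", d_inv_sqrt @ a_hat @ d_inv_sqrt)
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x):
            # x: (batch, num_joints, in_dim) per-keypoint appearance features
            x = torch.matmul(self.a_hat, x)    # aggregate features from connected joints
            return torch.relu(self.linear(x))  # per-node feature transformation

    # Toy usage: 15 joints joined in a chain (placeholder skeleton), 256-d features
    num_joints, feat_dim = 15, 256
    adj = torch.zeros(num_joints, num_joints)
    for i in range(num_joints - 1):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    layer = KeypointGCNLayer(feat_dim, 128, adj)
    out = layer(torch.randn(2, num_joints, feat_dim))  # -> (2, 15, 128)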

Overview

  • This is the implementation of Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking.
  • This repo focuses on the major contributions of our method.

Main Results

On the PoseTrack 2017 dataset, our method achieves 68.4 on the validation set and 60.2 on the test set, which is state-of-the-art compared with other methods.

Quick Start

Install

  1. Create an Anaconda environment named PGPT with Python 3.7, and activate it

  2. Install pytorch==0.4.0 following the official instructions (a quick sanity-check snippet follows this list)

  3. Clone this repo; we'll call the directory that you cloned ${PGPT_ROOT}

  4. Install dependencies

    pip install -r requirements.txt
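
As an optional sanity check of the environment, the minimal snippet below only assumes a standard PyTorch install; the printed version should match the pytorch==0.4.0 required above.

    # Quick check that PyTorch is installed and sees the GPU (optional)
    import torch
    print(torch.__version__)          # should print 0.4.0 per the instructions above
    print(torch.cuda.is_available())  # True if CUDA is correctly set up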
    

Demo

  1. Download the demo dataset and demo_val, and put them into the data folder in the following manner:

    ${PGPT_ROOT}
     |--data
         |--demodata
             |--images
             |--annotations
         |--demo_val.json
    
    • You can also use your own data in the same data format and data organization as the demo dataset.
  2. Download the PoseGCN model and Tracker model, and put them into the models folder in the following manner:

    ${PGPT_ROOT}
     |--models
         |--pose_gcn.pth.tar
         |--tracker.pth
    
  3. Download the detection results for the demo, and put them into the results folder in the following manner:

    • Right now we do not provide the detection and pose estimation models that we implemented. Our modules are based on Faster R-CNN for detection and Simple Baselines for pose estimation; you can clone their repos and train your own detection and pose estimation modules.

    • To run the demo smoothly, we provide demo_detection.json, which contains the demo results of our detection model. You can also run the demo with your own detection results in the same format as demo_detection.json (a minimal JSON loading sketch follows this list).

       ${PGPT_ROOT}
        |--results
        	  |--demo_detection.json
      
  4. You can run the demo with the following commands:

    cd ${PGPT_ROOT}
    sh demo.sh
    
    • The JSON results are stored in ${PGPT_ROOT}/results/demo
    • The rendered results are stored in ${PGPT_ROOT}/results/render
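
After running the demo, a quick way to sanity-check the input and output files is the minimal sketch below. It only assumes the paths shown above and that the files are valid JSON; the internal schema is not documented here, so nothing is asserted about the fields.

    # Minimal sketch: load the demo detection input and (after demo.sh) an output file.
    # Only the file paths from the layout above are assumed; the JSON schema is not.
    import json

    with open("results/demo_detection.json") as f:
        detections = json.load(f)
    print(type(detections))  # dict or list, depending on the file's schema

    # Tracking results are written under results/demo; substitute a real file name.
    # with open("results/demo/<output_file>.json") as f:
    #     results = json.load(f)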

Note

  • You can modify inference/config.py to suit your own paths.
  • We are still organizing the full project of our method, and we will release the whole project later.

Citation

If you use this code for your research, please consider citing:

@article{TMM2020-PGPT,
  title   = {Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking},
  author  = {Q. Bao and W. Liu and Y. Cheng and B. Zhou and T. Mei},
  journal = {IEEE Transactions on Multimedia},
  year    = {2020}
}
