
RAPiD

This repository is the official PyTorch implementation of the following paper. Our code can reproduce the training and testing results reported in the paper.

RAPiD: Rotation-Aware People Detection in Overhead Fisheye Images
[arXiv paper] [Project page]

Updates

  • [Oct 15, 2020]: Add instructions for training on COCO
  • [Oct 15, 2020]: Add instructions for evaluation

Installation

Requirements: the code should work as long as you have the following packages installed: PyTorch, torchvision, pycocotools, tqdm, and OpenCV.

An example of installation on Linux with CUDA 10.1 and Conda:

conda create --name RAPiD_env python=3.7
conda activate RAPiD_env

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
conda install -c conda-forge pycocotools
conda install tqdm opencv

# cd the_folder_to_install
git clone https://github.com/duanzhiihao/RAPiD.git
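
A quick sanity check (not part of the repository) to confirm that the main dependencies are importable and that PyTorch can see your GPU:

# minimal environment check; run from any Python shell
import torch, torchvision, cv2, tqdm, pycocotools
print('PyTorch', torch.__version__, '| torchvision', torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())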

Performance and pre-trained network weights

Below is the cross-validation performance on three datasets: Mirror Worlds-rotated bbox version (MW-R), HABBOF, and CEPDOF. The metric is Average Precision at IoU=0.5 (AP0.5). The links in the table refer to the pre-trained network weights that can reproduce each number.

Resolution | MW-R | HABBOF | CEPDOF
608        | 96.6 | 97.3   | 82.4
1024       | 96.7 | 98.1   | 85.8

A minimal guide for testing on a single image

  1. Clone the repository
  2. Download the pre-trained network weights, which are trained on COCO, MW-R, and HABBOF, and place the file under the RAPiD/weights folder.
  3. Directly run python example.py. Alternatively, demo.ipynb gives the same example in a Jupyter notebook. A rough sketch of what example.py does is shown below.
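
For reference, here is a sketch of a single-image test along the lines of example.py, using the Detector helper from api.py; the checkpoint filename, image path, and thresholds below are placeholders, so treat example.py itself as the authoritative version.

# minimal single-image detection sketch; paths and thresholds are placeholders
from api import Detector

# load RAPiD with the checkpoint you placed under RAPiD/weights
detector = Detector(model_name='rapid',
                    weights_path='./weights/downloaded_checkpoint.ckpt')
# detect people in one overhead fisheye image and draw the rotated boxes
detector.detect_one(img_path='path/to/fisheye_image.jpg',
                    input_size=1024, conf_thres=0.3,
                    visualize=True)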

Evaluation

Here is a minimal example of evaluating RAPiD on a single image in terms of the AP metric.

  1. Clone the repository. Download the pre-trained network weights, which are trained on COCO, MW-R, and HABBOF, and place the file under the RAPiD/weights folder.
  2. python evaluate.py --metric AP

The same evaluation process holds for published fisheye datasets such as CEPDOF. For example: python evaluate.py --imgs_path path/to/cepdof/Lunch1 --gt_path path/to/cepdof/annotations/Lunch1.json --metric AP
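
To evaluate every CEPDOF video in one pass, a small driver script can loop over the evaluate.py flags shown above; this is only a sketch, and the dataset layout below is a placeholder to adjust to your local copy.

# run evaluate.py for each CEPDOF video folder (paths are placeholders)
import subprocess
from pathlib import Path

cepdof_root = Path('path/to/cepdof')
for img_dir in sorted(cepdof_root.iterdir()):
    # skip the annotations folder and any stray files
    if not img_dir.is_dir() or img_dir.name == 'annotations':
        continue
    gt_json = cepdof_root / 'annotations' / f'{img_dir.name}.json'
    subprocess.run(['python', 'evaluate.py',
                    '--imgs_path', str(img_dir),
                    '--gt_path', str(gt_json),
                    '--metric', 'AP'], check=True)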

Training on COCO

  1. Download the Darknet-53 weights, which are pre-trained on ImageNet. They are identical to the ones provided by the official YOLOv3 author; the only difference is that I converted them to the PyTorch format.
  2. Place the weights file under the RAPiD/weights folder;
  3. Download the COCO dataset and put it at path/to/COCO
  4. Modify lines 59-61 in train.py to the following code snippet. Note that the string 'COCO' must appear somewhere in path/to/COCO. Modify the validation-set path too if you like (see the sketch after this list).
if args.dataset == 'COCO':
    train_img_dir = 'path/to/COCO/train2017'
    assert 'COCO' in train_img_dir # issue #11
    train_json = 'path/to/COCO/annotations/instances_train2017.json'
  5. python train.py --model rapid_pL1 --dataset COCO --batch_size 8 should work. Try to set the largest batch size that fits in your GPU memory.
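
If you also want the validation set to point at COCO, the corresponding lines (inside the same if-block) would look like the sketch below; the exact variable names are an assumption, so match whatever train.py actually uses.

    # sketch of the validation-set counterpart; variable names are assumptions
    val_img_dir = 'path/to/COCO/val2017'
    val_json = 'path/to/COCO/annotations/instances_val2017.json'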

Pre-trained checkpoint on COCO after 20k training iterations: download. Note that this is different from the one we reported in the paper. We encourage you to further fine-tune it, either on COCO (ideally >100k iterations) or on fisheye images, to get better performance.

Fine-tuning on fisheye image datasets

TBD

TODO

  • Update README

Citation

RAPiD source code is available for non-commercial use. If you find our code and dataset useful, or publish any work reporting results using this source code, please consider citing our paper:

Z. Duan, M.O. Tezcan, H. Nakamura, P. Ishwar and J. Konrad, 
“RAPiD: Rotation-Aware People Detection in Overhead Fisheye Images”, 
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 
Omnidirectional Computer Vision in Research and Industry (OmniCV) Workshop, June 2020.
