🌶️ Pepper: Yet Another Framework for Image and Video Re-ID

Motivation

There are a couple of popular deep-learning-based ReID frameworks such as torchreid and fastreid. These projects are very helpful for benchmarking SOTA methods as well as for implementing new ideas quickly, and I have been using them heavily for my personal projects. The one problem that reduced my productivity was having to add configuration defaults for every module I added. Inspired by OpenMMLab's projects, I created my own modular framework on top of mmcv, which significantly reduces this complexity.

Why use this framework?

Key points:

  • Customisable: experiments can be configured easily with the help of mmcv; no more bloated configs! pepper modules can also be integrated into other projects such as mmcls and mmdet.
  • Scalable: add modules easily through "registries" and implement new ideas quickly without worrying about breaking existing code (see the sketch after this list)
  • Fast: training and evaluation run with distributed processing
  • Robust: borrows and implements proven techniques from other projects
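
As a rough sketch of the registry pattern mentioned above, the snippet below uses mmcv's Registry and build_from_cfg; the BACKBONES registry and the MyBackbone class are hypothetical names used only to illustrate the pattern, not pepper's actual module layout.

# A minimal sketch of the registry pattern, assuming an mmcv 1.x-style setup.
# BACKBONES and MyBackbone are hypothetical names, not pepper's actual API.
import torch.nn as nn
from mmcv.utils import Registry, build_from_cfg

BACKBONES = Registry("backbone")  # one registry per module family

@BACKBONES.register_module()  # registering makes the class addressable from configs
class MyBackbone(nn.Module):
    def __init__(self, depth=50):
        super().__init__()
        self.depth = depth

# A config entry only needs "type" plus the constructor arguments; no extra defaults.
backbone = build_from_cfg(dict(type="MyBackbone", depth=18), BACKBONES)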

Other features:

  • supports image and video ReID
  • supports various datasets (including MOT datasets)
  • supports cross-dataset evaluation
  • supports training on multiple datasets (see the config sketch after this list)
  • multi-process multi-gpu distributed training
  • separate dataset preparation scripts
  • etc...
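
To give a flavour of what multi-dataset training and cross-dataset evaluation can look like in an mmcv-style config, here is a hypothetical fragment; every field name, dataset type, and path below is an illustrative placeholder in the spirit of OpenMMLab configs, not pepper's actual config schema.

# Hypothetical mmcv-style config fragment: train on two datasets,
# evaluate on a third (cross-dataset). Field names are illustrative only.
data = dict(
    train=[
        dict(type="Market1501", data_root="data/market1501"),
        dict(type="DukeMTMC", data_root="data/dukemtmc"),
    ],
    query=dict(type="MSMT17", data_root="data/msmt17", split="query"),
    gallery=dict(type="MSMT17", data_root="data/msmt17", split="gallery"),
)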

Notes:

  • I will get around to writing detailed documentation later; for now, please read the code or refer to similar frameworks such as mmcls and mmdet.
  • Please open an issue or a PR if you spot any bugs or possible improvements.

Installation

Clone the project:

git clone --recursive git@github.com:haruishi43/pepper.git
cd pepper

Dependencies:

  • torch and torchvision
  • mmcv
  • faiss-gpu

Other dependencies can be installed using the following command:

pip install -r requirements.txt

Installation:

Two options:

  1. Install pepper as a global library:

python setup.py develop
# or
pip install -e .

  2. Use pepper locally (no installation needed):

No commands are required, except when you want the optimized evaluation functionality:

cd pepper/core/evaluation/rank_cylib; make all

Preparing for training/evaluation:

Distributed Training (Recommended)

CUDA_VISIBLE_DEVICES=<gpu_ids> ./tools/dist_train.sh <config> <num_gpus>
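
For example, to train on four GPUs (the config path below is a hypothetical placeholder; point it at a config file that actually exists in your checkout):

CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_train.sh configs/my_experiment.py 4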

Projects

TODO:

  • README
  • Documentation
  • Upload model weights
  • Tests
  • PyPI installation
  • Update to 1.0
