
A simple, high-efficiency, strong framework for person re-identification.


Introduction

YouReID is a lightweight research framework that implements several state-of-the-art person re-identification algorithms for different ReID tasks and provides strong baseline models.

Major features

  • Concise and easy: a simple framework that is easy to use and customize. You can get started in 5 minutes.
  • High efficiency: mixed-precision and DistributedDataParallel training are supported. You can train the baseline model in 25 minutes on two 16 GB V100 GPUs with the Market-1501 dataset (a minimal sketch of this setup follows this list).
  • Strong: several baseline methods are supported, including baseline, PCB, and MGN. Notably, the baseline model reaches mAP=87.65% and rank-1=94.80% on the Market-1501 dataset.
  • Rich model zoo: state-of-the-art methods for ReID tasks such as Occluded, UDA, and cross-modal re-identification are supported.
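
The mixed-precision and distributed training mentioned above are standard PyTorch features. Below is a minimal sketch of how torch.cuda.amp and DistributedDataParallel fit together under the usual one-process-per-GPU launch; it is illustrative only and not this framework's actual training loop, and model, loader, and criterion are placeholders.

```python
# Minimal sketch of mixed-precision + DistributedDataParallel training
# (illustrative only; not YouReID's actual training loop).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model, loader, criterion, epochs=1):
    dist.init_process_group(backend="nccl")          # assumes one process per GPU via a launcher
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = DDP(model.cuda(), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    scaler = torch.cuda.amp.GradScaler()             # loss scaling for mixed precision

    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.cuda(), labels.cuda()
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():          # run the forward pass in fp16 where safe
                loss = criterion(model(images), labels)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
```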

Model Zoo

This project provides the following algorithms and scripts to run them. Please see the details at the links in the Description column.

| Field    | ABBRV          | Algorithms                                                                                           | Description          | Status      |
| -------- | -------------- | ---------------------------------------------------------------------------------------------------- | -------------------- | ----------- |
| SL       | CACENET        | Devil's in the Details: Aligning Visual Clues for Conditional Embedding in Person Re-Identification   | CACENET.md           | finished    |
| SL       | Pyramid        | Pyramidal Person Re-IDentification via Multi-Loss Dynamic Training                                    | CVPR-2019-Pyramid.md | finished    |
| Text     | NAFS           | Contextual Non-Local Alignment over Full-Scale Representation for Text-Based Person Search            | NAFS.md              | finished    |
| UDA      | ACT            | Asymmetric Co-Teaching for Unsupervised Cross Domain Person Re-Identification                         | AAAI-2020-ACT.md     | coming soon |
| Video    | TSF            | Rethinking Temporal Fusion for Video-based Person Re-identification on Semantic and Time Aspect       | AAAI-2020-TSF.md     | coming soon |
| 3D       | Person-ReID-3D | Learning 3D Shape Feature for Texture-insensitive Person Re-identification                            | CVPR-2021-PR3D.md    | planned     |
| Occluded | PartNet        | Human Pose Information Discretization for Occluded Person Re-Identification                           | PartNet.md           | planned     |

You can also find these models in the model_zoo. Notably, we have contributed some ReID samples to the OpenCV community, so you can use these models in OpenCV; you can also find them at ReID_extra_testdata. A brief example of running such a model with OpenCV is sketched below.
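
As an example of the OpenCV usage mentioned above, feature extraction with the cv2.dnn module generally looks like the following sketch; the model file name, input size, and preprocessing here are placeholders, so check the actual samples in ReID_extra_testdata for the real settings.

```python
# Sketch of running an exported ReID model with OpenCV's dnn module;
# "person_reid.onnx", the 128x256 input size, and the preprocessing are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("person_reid.onnx")    # hypothetical exported model file

def extract_feature(person_crop_bgr):
    # Resize the crop and pack it into an NCHW blob; real preprocessing may also
    # subtract a mean and divide by a std, depending on how the model was exported.
    blob = cv2.dnn.blobFromImage(person_crop_bgr, scalefactor=1.0 / 255,
                                 size=(128, 256), swapRB=True, crop=False)
    net.setInput(blob)
    feature = net.forward().flatten()
    return feature / np.linalg.norm(feature)          # L2-normalize for cosine matching
```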

Requirements and Preparation

Please install Python>=3.6 and PyTorch>=1.6.0.

Prepare Datasets

Download the public datasets (such as Market-1501 and DukeMTMC-reID) and organize them in the following format:

File directory:

├── partitions.pkl
├── images
│   ├── 0000000_0000_000000.png
│   ├── 0000001_0000_000001.png
│   ├── ...
  1. Rename the images following the convention "000000_000_000000.png", where the first underscore-separated substring is the person identity; in the second substring, the first digit is the camera id and the rest is the track id; and the third substring is an image offset.

  2. "partitions.pkl" file This file contains a python dictionary storing meta data of the datasets, which contains folling key value pairs "train_im_names": [list of image names] #storing a list of names of training images "train_ids2labels":{"identity":label} #a map that maps the person identity string to a integer label "val_im_names": [list of image names] #storing a list of names of validation images "test_im_names": [list of image names] #storing a list of names of testing images "test_marks"/"val_marks": [list of 0/1] #0/1 indicates if an image is in gallery

You can run tools/transform_format.py to get the formatted dataset, or download the formatted Market-1501 dataset.

Getting Started

Clone this GitHub repository:

git clone https://github.com/TencentYoutuResearch/PersonReID-YouReID.git

Train

  1. Configure the basic settings in core/config.
  2. Define the network in net and register it in factory.py (a hypothetical sketch of this pattern is shown after this list).
  3. Set the corresponding hyperparameters in the example yaml.
  4. Set the example.yaml path in config.yaml.
  5. Set the port and GPU configuration in cmd.sh.
  6. Run cd train && ./cmd.sh.
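
The exact interfaces of net and factory.py are not shown here, so the sketch below only illustrates the general define-and-register pattern of step 2; MyNet, model_factory, and create_model are hypothetical names rather than this repository's API.

```python
# Hypothetical illustration of the "define a network and register it" pattern;
# the real net/ package and factory.py in this repo may differ.
import torch.nn as nn

class MyNet(nn.Module):
    """Toy backbone: a global-average-pooled feature plus an identity classifier."""
    def __init__(self, num_classes=751, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(feat_dim),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feat = self.backbone(x).flatten(1)    # global feature used for retrieval
        return feat, self.classifier(feat)    # plus identity logits for training

# A factory maps a config string (e.g. from the example yaml) to a constructor.
model_factory = {"mynet": MyNet}

def create_model(name, **kwargs):
    return model_factory[name](**kwargs)
```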

Quick Start

cd train && ./cmd.sh

Citation

If you are interested in our work, please cite our papers.