# OpenBCT

## Introduction

OpenBCT is an unofficial reimplementation of Towards Backward-Compatible Representation Learning (CVPR 2020 oral), which aims to achieve feature compatibility across different models.

## Feature List

- Baselines for backward-compatible training
- Influence loss with the old classifier
- Pseudo old classifier generation (see the sketch below)
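One way to obtain a pseudo old classifier, when the original old classification head is unavailable, is to use per-class centroids of the old model's features as classifier weights. The sketch below illustrates that idea only; the function name and the L2 normalization of the centroids are our assumptions, not necessarily the repo's exact procedure.

```python
import torch
import torch.nn.functional as F

def make_pseudo_old_classifier(old_feats, labels, num_classes):
    """Build pseudo classifier weights as per-class centroids of old-model
    features. old_feats: (N, D); labels: (N,) LongTensor. Assumes every
    class appears at least once."""
    weight = torch.zeros(num_classes, old_feats.size(1))
    for c in range(num_classes):
        weight[c] = old_feats[labels == c].mean(dim=0)
    # normalizing the centroids (an assumption) mimics a cosine-style classifier
    return F.normalize(weight, dim=1)
```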

## Requirements

- Python >= 3.6
- PyTorch >= 1.2.0
- torchvision == 0.2.1
- numpy
- scikit-learn
- easydict

## Datasets

This code conducts training and evaluation on the ImageNet LSVRC 2012 and Places365 (easy directory structure) datasets. Please note that, due to privacy concerns, this code does NOT provide the training and test code for the face datasets used in our CVPR 2020 paper.

After downloading the two datasets, extract them and make sure each contains ./train and ./val folders. You can find the training image lists we use at the Google Drive Link or Weiyun Link.
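This ./train and ./val layout matches torchvision's ImageFolder convention (one subfolder per class). As a quick sanity check after extraction, you can load a split like this; the preprocessing values are illustrative, not necessarily what main.py uses:

```python
from torchvision import datasets, transforms

# standard ImageNet-style preprocessing; the exact sizes are illustrative
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
val_set = datasets.ImageFolder("your_dataset_dir/val", transform=transform)
print(len(val_set), "images,", len(val_set.classes), "classes")
```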

## Train

To train an old model without any regularization:

```
python main.py your_dataset_dir --train-img-list imgnet_train_img_list_for_old.txt -a resnet18
```

To train a new model with the influence loss (old classifier regularization):

```
python main.py your_dataset_dir --train-img-list imgnet_train_img_list_for_new.txt -a resnet50 --old-fc your_old_fc_weights_dir --n2o-map ./imgnet_new2old_map.npy
```
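Conceptually, the influence loss feeds the new model's features through the frozen old classifier (--old-fc) and applies cross-entropy against labels remapped into the old class space (--n2o-map). Below is a minimal sketch of that idea, not the repo's exact implementation; in particular, using -1 to mark new classes absent from the old label space is our assumption:

```python
import torch.nn.functional as F

def influence_loss(feat_new, old_fc, target_new, n2o_map):
    """feat_new: (B, D) features from the new model; old_fc: the frozen old
    classification head (e.g. an nn.Linear); n2o_map: LongTensor mapping new
    class ids to old class ids, with -1 for classes the old model never saw."""
    target_old = n2o_map[target_new]      # relabel into the old class space
    keep = target_old >= 0                # keep only classes known to the old model
    if not keep.any():
        return feat_new.new_zeros(())
    logits_old = old_fc(feat_new[keep])   # score new features with the old head
    return F.cross_entropy(logits_old, target_old[keep])
```

During training this term is added to the new model's own classification loss, so the new embedding stays usable by the old classifier.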

To train a new model with the L2 regression loss (one of the compared baselines):

```
python main.py your_dataset_dir --train-img-list imgnet_train_img_list_for_new.txt -a resnet50 --old-arch resnet18 --old-checkpoint your_old_model_dir --l2 --use-feat
```
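The L2 baseline instead regresses the new model's features directly toward the old model's features for the same images (hence --old-checkpoint and --use-feat). A minimal sketch, with the weighting factor lam being an illustrative name:

```python
import torch.nn.functional as F

def l2_regression_loss(feat_new, feat_old):
    # pull new features toward the (frozen) old features of the same images
    return F.mse_loss(feat_new, feat_old.detach())

# illustrative total objective: classification loss plus the regression term
# loss = F.cross_entropy(logits_new, target) + lam * l2_regression_loss(feat_new, feat_old)
```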

## Test

For a cross test between two models:

```
python main.py your_dataset_dir -a resnet50 --pretrained --checkpoint your_new_model_dir --old-fc your_old_fc_weights_dir --use-feat -e --cross-eval --old-arch resnet18 --old-checkpoint your_old_model_dir
```

For a self test with a single model:

```
python main.py your_dataset_dir -a resnet50 --pretrained --checkpoint your_model_dir --use-feat -e
```
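In a cross test, one model's features serve as queries and the other's as the gallery, and retrieval is scored by feature L2 distance; this is also how the Top-k numbers in Results are computed. A minimal sketch of that metric (the exact evaluation protocol in main.py may differ):

```python
import torch

def cross_topk_accuracy(query_feat, gallery_feat, query_lbl, gallery_lbl, k=5):
    """query_feat: (Q, D), e.g. new-model features; gallery_feat: (G, D),
    e.g. old-model features; labels are LongTensors of shape (Q,) and (G,)."""
    dist = torch.cdist(query_feat, gallery_feat)         # pairwise L2 distances
    nn_idx = dist.topk(k, dim=1, largest=False).indices  # k nearest gallery items
    hit = gallery_lbl[nn_idx] == query_lbl.unsqueeze(1)  # label matches per neighbor
    top1 = hit[:, 0].float().mean().item()
    topk = hit.any(dim=1).float().mean().item()
    return top1, topk
```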

## Train with DDP

To train an old model without any regularization:

```
python main.py your_dataset_dir --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 --train-img-list imgnet_train_img_list_for_old.txt -a resnet18
```

To train a new model with the influence loss:

```
python main.py your_dataset_dir --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 --train-img-list imgnet_train_img_list_for_new.txt -a resnet50 --old-fc your_old_fc_weights_dir --n2o-map ./imgnet_new2old_map.npy
```

To train a new model with the L2 regression loss (one of the compared baselines):

```
python main.py your_dataset_dir --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 --train-img-list imgnet_train_img_list_for_new.txt -a resnet50 --old-arch resnet18 --old-checkpoint your_old_model_dir --l2 --use-feat
```

Note: these commands are for a single machine with multiple GPUs; replace FREEPORT with any free port.
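Under the hood, --multiprocessing-distributed follows the standard PyTorch ImageNet example pattern: one worker process is spawned per GPU, and each joins an NCCL process group through the --dist-url rendezvous address. A minimal sketch of that pattern for a single machine (not the repo's exact code; the port is illustrative):

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def main_worker(gpu, ngpus_per_node, dist_url):
    # on a single machine, each local GPU becomes one rank in the group
    dist.init_process_group(backend="nccl", init_method=dist_url,
                            world_size=ngpus_per_node, rank=gpu)
    torch.cuda.set_device(gpu)
    # ... build the model, then wrap it for synchronized gradients ...
    # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[gpu])

if __name__ == "__main__":
    ngpus = torch.cuda.device_count()
    mp.spawn(main_worker, nprocs=ngpus, args=(ngpus, "tcp://127.0.0.1:23456"))
```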

## Results

Results with the influence loss and the compared baselines on ImageNet and Places365 are listed below. Note that, during testing, we use the feature L2 distance to compute Top-k accuracy. More results are on the way.

| Old Model | New Model | Val Set | Top-1 Acc | Top-5 Acc |
| --- | --- | --- | --- | --- |
| resnet18 (50%) | resnet18 (50%) | ImageNet Val | 39.5% | 60.0% |
| resnet18 (50%) | resnet50-BCT (100%) | ImageNet Val | 42.2% | 65.5% |
| resnet18 (50%) | resnet50-BCT-normed-clsfier (100%) | ImageNet Val | 46.8% | 66.7% |
| resnet18 (50%) | resnet50-L2 (100%) | ImageNet Val | 13.0% | 32.8% |
| resnet18 (50%) | resnet50-Triplet (100%) | ImageNet Val | 42.9% | 63.3% |
| resnet18 (50%) | resnet50-Contra (100%) | ImageNet Val | 42.7% | 63.2% |
| resnet50-L2 (100%) | resnet50-L2 (100%) | ImageNet Val | 43.8% | 64.4% |
| resnet50-Triplet (100%) | resnet50-Triplet (100%) | ImageNet Val | 53.7% | 74.3% |
| resnet50-Contra (100%) | resnet50-Contra (100%) | ImageNet Val | 57.0% | 76.3% |
| resnet50-BCT (100%) | resnet50-BCT (100%) | ImageNet Val | 55.6% | 76.6% |
| resnet50 (100%) | resnet50 (100%) | ImageNet Val | 66.3% | 84.0% |

| Old Model | New Model | Val Set | Top-1 Acc | Top-5 Acc |
| --- | --- | --- | --- | --- |
| resnet18 (50%) | resnet18 (50%) | Places365 Val | 27.0% | 55.9% |
| resnet18 (50%) | resnet50-BCT (100%) | Places365 Val | 27.5% | 57.8% |
| resnet50-BCT (100%) | resnet50-BCT (100%) | Places365 Val | 32.9% | 62.2% |
| resnet50 (100%) | resnet50 (100%) | Places365 Val | 35.1% | 64.0% |

In these tables, x% denotes the fraction of the training data used.

## Acknowledgement

The code is based on Open-ReID and Pytorch-ImageNet-Example. Thanks to these researchers for sharing their great code!

## Citation

If this code helps your research or project, please cite:

```
@inproceedings{shen2020towards,
  title={Towards backward-compatible representation learning},
  author={Shen, Yantao and Xiong, Yuanjun and Xia, Wei and Soatto, Stefano},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6368--6377},
  year={2020}
}
```

## Contact

If you have any questions, please feel free to contact:

Yantao Shen: ytshen@link.cuhk.edu.hk