CycleSiam

This is the official implementation of CycleSiam, introduced in "Self-supervised Object Tracking and Segmentation with Cycle-consistent Siamese Networks". It is built on top of SiamMask. For technical details, please refer to:

Self-supervised Object Tracking and Segmentation with Cycle-consistent Siamese Networks
Weihao Yuan, Michael Yu Wang, Qifeng Chen
IROS2020
[Paper]

Bibtex

If you find this code useful, please consider citing:

@inproceedings{yuan2020self,
  title={Self-supervised object tracking and segmentation with cycle-consistent siamese networks},
  author={Yuan, Weihao and Wang, Michael Yu and Chen, Qifeng},
  booktitle={Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={},
  year={2020},
  organization={IEEE}
}

Contents

  1. Environment Setup
  2. Demo
  3. Training

Environment setup

This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1, CUDA 9.2, and RTX 2080 GPUs.

  • Clone the repository
git clone https://github.com/weihaosky/CycleSiam.git && cd CycleSiam
export CycleSiam=$PWD
  • Set up the Python environment
conda create -n cyclesiam python=3.6
source activate cyclesiam
pip install -r requirements.txt
bash make.sh
  • Add the project to your PYTHONPATH (a quick environment sanity check is sketched after these steps)
export PYTHONPATH=$PWD:$PYTHONPATH
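
A minimal sanity check like the one below can confirm the environment before moving on; the expected values simply mirror the tested configuration listed above.

```python
# Minimal environment sanity check; expected values mirror the tested
# configuration above (Python 3.6, PyTorch 0.4.1, CUDA 9.2, RTX 2080).
import sys
import torch

print("Python:", sys.version.split()[0])            # expected 3.6.x
print("PyTorch:", torch.__version__)                # expected 0.4.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)      # expected 9.2
    print("GPU:", torch.cuda.get_device_name(0))    # e.g. GeForce RTX 2080
```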

Demo

  • Run demo.py (a stripped-down sketch of the demo loop follows the commands below)
cd $CycleSiam/experiments/siammask_sharp
export PYTHONPATH=$PWD:$PYTHONPATH
python ../../tools/demo.py --resume checkpoint_cyclesiam_plus.pth --config config_davis.json
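
For reference, the sketch below strips demo.py down to its core loop. It assumes CycleSiam keeps the SiamMask-style interface it is built on (Custom in custom.py, siamese_init / siamese_track in tools/test.py, load_pretrain in utils/load_helper.py); those names, their arguments, and the example video path are assumptions carried over from SiamMask, not guarantees about this repository.

```python
# Hedged sketch of the demo loop, run from experiments/siammask_sharp/.
# It assumes the SiamMask-style API that CycleSiam builds on; the function
# names, arguments, and the input video path are assumptions.
import json
import cv2
import numpy as np

from custom import Custom                      # network definition (assumed, as in SiamMask)
from tools.test import siamese_init, siamese_track
from utils.load_helper import load_pretrain

cfg = json.load(open("config_davis.json"))
model = load_pretrain(Custom(anchors=cfg["anchors"]),
                      "checkpoint_cyclesiam_plus.pth").eval().cuda()

cap = cv2.VideoCapture("my_video.mp4")         # hypothetical input video
ok, frame = cap.read()
x, y, w, h = cv2.selectROI("CycleSiam demo", frame, False, False)  # draw the initial box

# Initialise the tracker state from the first frame and the selected box.
state = siamese_init(frame, np.array([x + w / 2, y + h / 2]),
                     np.array([w, h]), model, cfg["hp"], device="cuda")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Track and segment the target in the next frame.
    state = siamese_track(state, frame, mask_enable=True,
                          refine_enable=True, device="cuda")
    mask = state["mask"] > state["p"].seg_thr  # binary segmentation mask
    frame[:, :, 2] = (mask > 0) * 255 + (mask == 0) * frame[:, :, 2]
    cv2.imshow("CycleSiam demo", frame)
    if cv2.waitKey(1) == 27:                   # press Esc to quit
        break
```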

Training

Training Data

Download the pre-trained model (174 MB)

(This model was trained on the ImageNet-1k dataset; a quick check of the downloaded file is sketched after the commands below.)

cd $CycleSiam/experiments
wget http://www.robots.ox.ac.uk/~qwang/resnet.model
ls | grep siam | xargs -I {} cp resnet.model {}
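
The wget command downloads the ImageNet-pretrained ResNet backbone, and the following line copies it into every siam* experiment directory. A quick, hedged way to confirm the file loads before training (the checkpoint layout is an assumption; both a raw state_dict and a wrapped dict are handled):

```python
# Peek into the downloaded backbone checkpoint to confirm it loads.
# The exact layout (raw state_dict vs. {'state_dict': ...}) is an assumption.
import torch

ckpt = torch.load("resnet.model", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print("tensors in checkpoint:", len(state_dict))
for name in list(state_dict)[:5]:              # show a few parameter names
    print(" ", name, tuple(state_dict[name].shape))
```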

Training CycleSiam base model

  • Setup your environment
  • From the experiment directory, run
cd $CycleSiam/experiments/siammask_base/
bash run.sh
  • If you experience out-of-memory errors, you can reduce the batch size in run.sh.
  • You can view progress on TensorBoard (logs are at <experiment_dir>/logs/)
  • After training, you can test the checkpoints on the VOT dataset:
bash test_all.sh -s 1 -e 20 -d VOT2018 -g "0 1 2 3"  # test all snapshots with 4 GPUs
  • Select the best model for hyperparameter search (a snapshot-listing sketch follows this list).
#bash test_all.sh -m [best_test_model] -d VOT2018 -n [thread_num] -g [gpu_num] # 8 threads with 4 GPUs
bash test_all.sh -m snapshot/checkpoint_e18.pth -d VOT2018 -n 8 -g "0 1 2 3" # 8 threads with 4 GPUs
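
The sketch below lists the snapshots saved under snapshot/ and prints the epoch recorded in each one, assuming the usual {'epoch': ..., 'state_dict': ...} checkpoint layout (an assumption, not verified against this repository); it may help when picking a checkpoint for the hyperparameter search.

```python
# List training snapshots (snapshot/checkpoint_e<N>.pth) and print the epoch
# stored in each one, if present. The checkpoint layout is an assumption.
import glob
import torch

for path in sorted(glob.glob("snapshot/checkpoint_e*.pth")):
    ckpt = torch.load(path, map_location="cpu")
    epoch = ckpt.get("epoch", "n/a") if isinstance(ckpt, dict) else "n/a"
    print(path, "-> epoch:", epoch)
```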

Training CycleSiam model with the Refine module

  • Setup your environment
  • From the experiment directory, train with the best CycleSiam base model:
cd $CycleSiam/experiments/siammask_sharp
bash run.sh <best_base_model>
bash run.sh checkpoint_e18.pth
  • You can view progress on TensorBoard (logs are at <experiment_dir>/logs/)
  • After training, you can test the checkpoints on the VOT dataset:
bash test_all.sh -s 1 -e 20 -d VOT2018 -g "0 1 2 3"
  • Select the best model for hyperparameter search.
#bash test_all.sh -m [best_test_model] -d VOT2018 -n [thread_num] -g [gpu_num] # 8 threads with 4 GPUs
bash test_all.sh -m snapshot/checkpoint_e19.pth -d VOT2018 -n 8 -g "0 1 2 3" # 8 threads with 4 GPUs

Pretrained models

| Model      | VOT2016 (EAO / A / R)  | VOT2018 (EAO / A / R)  | DAVIS2016 (J / F) | DAVIS2017 (J / F) | Speed (FPS) |
|------------|------------------------|------------------------|-------------------|-------------------|-------------|
| CycleSiam  | 0.371 / 0.603 / 0.294  | 0.294 / 0.562 / 0.389  | - / -             | - / -             | 59          |
| CycleSiam+ | 0.398 / 0.601 / 0.247  | 0.317 / 0.549 / 0.314  | 64.9 / 62.0       | 50.9 / 56.8       | 44          |

License

Licensed under an MIT license.
