
OAA-PyTorch

The official PyTorch code for "Integral Object Mining via Online Attention Accumulation". The implementation is based on the code of psa and ACoL, and the segmentation framework is borrowed from deeplab-pytorch.

Installation

python3
torch >= 1.0
tqdm
torchvision
opencv-python

Download the VOCdevkit.tar.gz file and extract it into the data/ folder.
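
A quick way to check that the environment matches these requirements (this small script is not part of the repository, just a sanity check; the OpenCV package installs as opencv-python and imports as cv2):

import torch
import torchvision
import cv2
import tqdm

# Versions required by the repository: torch >= 1.0, plus torchvision, OpenCV and tqdm.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("opencv:", cv2.__version__)
print("tqdm:", tqdm.__version__)
print("CUDA available:", torch.cuda.is_available())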

Online Attention Accumulation

cd OAA-PyTorch/
./train.sh 
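
train.sh trains the classification network and, at the same time, accumulates the attention maps produced for each image and class at different training steps. Below is a minimal sketch of the accumulation idea, assuming an element-wise maximum as the fusion strategy described in the paper (the names are illustrative, not the repository's API):

import torch

def fuse_attention(cumulative, current):
    # Element-wise maximum keeps every region that has ever been highlighted,
    # so the accumulated map grows toward covering the whole object.
    return torch.max(cumulative, current)

# During training, for each image and each of its image-level classes:
# cumulative_att[image_id][cls] = fuse_attention(cumulative_att[image_id][cls],
#                                                attention_map.detach())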

After the training process, you can resize the accumulated attention maps to the original image size by

python res.py
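
res.py only rescales the stored maps, which are saved at the network's output resolution, back to the size of the corresponding input image. A minimal sketch of that step (the paths and file layout below are assumptions, not the script's actual interface):

import cv2

def resize_attention_map(att_path, image_path, out_path):
    att = cv2.imread(att_path, cv2.IMREAD_GRAYSCALE)  # accumulated attention map
    img = cv2.imread(image_path)                      # original VOC image
    h, w = img.shape[:2]
    att = cv2.resize(att, (w, h), interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(out_path, att)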

For comparison with the accumulated maps, you can also generate the attention maps of the final classification model by

./test.sh
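
test.sh extracts attention maps from the final checkpoint only, with no accumulation, so these maps typically highlight only the most discriminative parts. Conceptually they are CAM-style maps; here is a generic sketch, assuming the network outputs class-wise score maps before global average pooling (which may differ from the repository's exact architecture):

import torch.nn.functional as F

def class_attention(score_maps, class_idx):
    # score_maps: (num_classes, h, w) maps from the last convolutional layer.
    att = F.relu(score_maps[class_idx])
    att = att / (att.max() + 1e-8)  # normalize to [0, 1]
    return att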

Integral Attention Learning

If you want to skip the online attention accumulation process and train the integral model directly, download the pre-accumulated maps and extract them into exp1/.

./train_iam.sh
./test_iam.sh
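
train_iam.sh fits the integral attention model using the pre-accumulated maps as pixel-level supervision. As a rough simplification (the paper uses an enhanced per-pixel loss; the plain regression below is only meant to convey the idea), the model is trained to reproduce the accumulated maps:

import torch.nn.functional as F

def integral_attention_loss(predicted_att, accumulated_att):
    # predicted_att / accumulated_att: (N, num_classes, h, w), values in [0, 1].
    # Simplified supervision: push the model's attention toward the accumulated maps.
    return F.mse_loss(predicted_att, accumulated_att)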

Attention Drop Layer

./train+.sh 
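
The attention drop layer hides the most discriminative (highest-attention) regions from the classifier during training, which pushes the network to respond on the remaining object parts and enlarges the accumulated maps. A minimal sketch of such a drop operation (the threshold value and where the layer is inserted are assumptions):

import torch

def attention_drop(features, attention, thresh=0.8):
    # features:  (N, C, h, w) feature maps.
    # attention: (N, 1, h, w) attention normalized to [0, 1].
    keep_mask = (attention < thresh).float()   # 0 where attention is high
    return features * keep_mask                # drop the strongest regions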

After the training process, you can again resize the accumulated attention maps to the original image size by

python res.py

Weakly Supervised Segmentation

To train a segmentation model, you first need to generate pseudo segmentation labels by

python gen_gt.py
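
gen_gt.py turns the class-wise attention maps into pseudo segmentation labels. A common recipe, sketched below, is to take the argmax over the attention maps of the classes present in the image and assign background where every response is weak (the exact thresholds and any saliency handling in the script may differ):

import numpy as np

def attention_to_label(attention, present_classes, bg_thresh=0.3):
    # attention: (K, H, W) normalized maps for the K image-level classes.
    # present_classes: the K corresponding VOC class ids (1..20).
    best = attention.argmax(axis=0)                   # index into the K maps
    label = np.asarray(present_classes)[best]         # map to VOC class ids
    label[attention.max(axis=0) < bg_thresh] = 0      # weak response -> background
    return label.astype(np.uint8)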

This script writes the pseudo segmentation labels to './data/VOCdevkit/VOC2012/proxy-gt/'. Then you can set up and train the deeplab-pytorch model as follows:

cd deeplab-pytorch
bash scripts/setup_caffemodels.sh
python convert.py --dataset coco
python convert.py --dataset voc12

Train the segmentation model by

python main.py train \
    --config-path configs/voc12.yaml

Test the segmentation model by

python main.py test \
    --config-path configs/voc12.yaml \
    --model-path data/models/voc12/deeplabv2_resnet101_msc/train_aug/checkpoint_final.pth

Apply the CRF post-processing by

python main.py crf \
    --config-path configs/voc12.yaml
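
The crf command refines the predicted probability maps with a fully connected CRF. A rough sketch of this kind of post-processing using pydensecrf (the pairwise parameters below are illustrative, not the values in the deeplab-pytorch config):

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(probs, image, n_iters=10):
    # probs: (num_classes, H, W) softmax output; image: (H, W, 3) uint8 RGB.
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs))
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=67, srgb=3, rgbim=np.ascontiguousarray(image), compat=4)
    q = np.array(d.inference(n_iters)).reshape(c, h, w)
    return q.argmax(axis=0)  # refined label map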

Performance

Method      mIoU    mIoU (CRF)
OAA         65.7    66.9
OAA+        66.6    67.8
OAA-drop    67.5    68.8

If you have any questions about OAA, please feel free to contact me (pt.jiang AT mail DOT nankai.edu.cn).

Citation

If you use this code or these models in your research, please cite:

@inproceedings{jiang2019integral,
  title={Integral Object Mining via Online Attention Accumulation},
  author={Jiang, Peng-Tao and Hou, Qibin and Cao, Yang and Cheng, Ming-Ming and Wei, Yunchao and Xiong, Hong-Kai},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={2070--2079},
  year={2019}
}
@article{jiang2021online,
  title={Online Attention Accumulation for Weakly Supervised Semantic Segmentation},
  author={Jiang, Peng-Tao and Han, Ling-Hao and Hou, Qibin and Cheng, Ming-Ming and Wei, Yunchao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

License

The code is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License for non-commercial use only. For commercial use, please obtain formal permission first.
