
Learning to Prune Image Captioning Models


PDF | ArXiv

Official PyTorch implementation of the paper: "End-to-End Supermask Pruning: Learning to Prune Image Captioning Models"

Published in Pattern Recognition, Elsevier

Released on July 20, 2021

Description

This work is the first to explore model pruning for the image captioning task. Empirically, we show that networks at 80% to 95% sparsity can match or even slightly outperform their dense counterparts. To promote Green Computer Vision, we release pre-trained sparse models for UD and ORT that achieve CIDEr scores above 120 on the MS-COCO dataset, yet are only 8.7 MB (a 96% reduction compared to dense UD) and 14.5 MB (a 94% reduction compared to dense ORT) in model size.

Figure 1: Our deep captioning networks at 80% to 95% sparsity either match or slightly outperform their dense counterparts.
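For intuition, the sketch below illustrates the supermask idea behind SMP: each weight gets a learnable score, the scores are binarised into a pruning mask with a straight-through gradient estimator, and the mask is learned end-to-end together with the weights. This is a minimal sketch, not the repository's actual implementation; the `SupermaskLinear` class, the score initialisation, and the fixed top-k binarisation are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinariseSTE(torch.autograd.Function):
    """Threshold mask scores into a binary mask; pass gradients straight through."""

    @staticmethod
    def forward(ctx, scores, sparsity):
        # Keep the top (1 - sparsity) fraction of weights, ranked by score.
        k = max(1, int((1.0 - sparsity) * scores.numel()))
        threshold = torch.topk(scores.flatten(), k).values.min()
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients flow to the scores unchanged.
        return grad_output, None


class SupermaskLinear(nn.Linear):
    """Linear layer whose weight matrix is gated by a learned binary mask."""

    def __init__(self, in_features, out_features, sparsity=0.95, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.sparsity = sparsity
        # One learnable score per weight; its rank decides what gets pruned.
        self.mask_scores = nn.Parameter(0.01 * torch.randn_like(self.weight))

    def forward(self, x):
        mask = BinariseSTE.apply(self.mask_scores, self.sparsity)
        return F.linear(x, self.weight * mask, self.bias)


layer = SupermaskLinear(512, 512, sparsity=0.95)
out = layer(torch.randn(4, 512))  # weights and mask scores train jointly
```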

Get Started

Please refer to the documentation.

Features

Pre-trained Sparse and ACORT Models

The checkpoints are available at this repo.

Soft-attention models implemented in TensorFlow 1.9 are available at this repo.

CIDEr scores of pruning methods (NNZ = number of non-zero parameters)

Up-Down (UD)

| Sparsity | NNZ | Dense Baseline | SMP | Lottery ticket (class-blind) | Lottery ticket (class-uniform) | Lottery ticket (gradual) | Gradual pruning | Hard pruning (class-blind) | Hard pruning (class-distribution) | Hard pruning (class-uniform) | SNIP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.950 | 2.7 M | 111.3 | 112.5 | - | 107.7 | 109.5 | 109.7 | - | 110.0 | 110.2 | 38.2 |
| 0.975 | 1.3 M | 111.3 | 110.6 | - | 103.8 | 106.6 | 107.0 | - | 105.9 | 105.4 | 34.7 |
| 0.988 | 0.7 M | 111.3 | 109.0 | - | 99.3 | 102.2 | 103.4 | - | 101.3 | 100.5 | 32.6 |
| 0.991 | 0.5 M | 111.3 | 107.8 | | | | | | | | |

Object Relation Transformer (ORT)

| Sparsity | NNZ | Dense Baseline | SMP | Lottery ticket (gradual) | Gradual pruning | Hard pruning (class-blind) | Hard pruning (class-distribution) | Hard pruning (class-uniform) | SNIP |
|---|---|---|---|---|---|---|---|---|---|
| 0.950 | 2.8 M | 114.7 | 113.7 | 115.7 | 115.3 | 4.1 | 112.5 | 113.0 | 47.2 |
| 0.975 | 1.4 M | 114.7 | 113.7 | 112.9 | 113.2 | 0.7 | 106.6 | 106.9 | 44.0 |
| 0.988 | 0.7 M | 114.7 | 110.7 | 109.8 | 110.0 | 0.9 | 96.9 | 59.8 | 37.3 |
| 0.991 | 0.5 M | 114.7 | 109.3 | 107.1 | 107.0 | | | | |
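To verify the NNZ and sparsity figures above for a downloaded checkpoint, something like the sketch below works. `checkpoint.pth` is a placeholder path, and a flat tensor state dict is assumed; adjust both to match the actual released files, which may nest the state dict under a key.

```python
import torch

# Placeholder path: point this at a downloaded sparse model checkpoint.
state_dict = torch.load("checkpoint.pth", map_location="cpu")

# Count total and non-zero floating-point parameters.
total = sum(t.numel() for t in state_dict.values() if t.is_floating_point())
nnz = sum(t.count_nonzero().item() for t in state_dict.values() if t.is_floating_point())

print(f"NNZ: {nnz / 1e6:.2f} M  |  sparsity: {1.0 - nnz / total:.3f}")
```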

Acknowledgements

Citation

If you find this work useful for your research, please cite:

@article{tan2021end,
  title={End-to-End Supermask Pruning: Learning to Prune Image Captioning Models},
  author={Tan, Jia Huei and Chan, Chee Seng and Chuah, Joon Huang},
  journal={Pattern Recognition},
  pages={108366},
  year={2021},
  publisher={Elsevier},
  doi={10.1016/j.patcog.2021.108366}
}

Feedback

Suggestions and opinions on this work (both positive and negative) are greatly appreciated. Please contact the authors by sending an email to tan.jia.huei at gmail.com or cs.chan at um.edu.my.

License and Copyright

The project is open source under the BSD-3-Clause license (see the LICENSE file).

©2021 Universiti Malaya.

Dev Info

Run Black formatting:

```bash
black --line-length=120 --safe sparse_caption
black --line-length=120 --safe tests
black --line-length=120 --safe scripts
```