# Why-the-State-of-Pruning-so-Confusing

This repository accompanies our preprint, which deciphers the recent confusing benchmark situation in neural network (filter) pruning:

Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning [arXiv | Latest PDF]
Huan Wang, Can Qin, Yue Bai, Yun Fu
Northeastern University, Boston, MA, USA

## Abstract

The state of neural network pruning has been noticed to be unclear and even confusing for a while, largely due to "a lack of standardized benchmarks and metrics" [3]. To standardize benchmarks, we first need to answer: what kind of comparison setup is considered fair? Unfortunately, this basic yet crucial question has barely been clarified in the community. Meanwhile, we observe that several papers have used (severely) sub-optimal hyper-parameters in pruning experiments, while the reasons behind them are also elusive. These sub-optimal hyper-parameters further exacerbate the distorted benchmarks, rendering the state of neural network pruning even more obscure.

Two mysteries in pruning epitomize this confusing status: the performance-boosting effect of a larger finetuning learning rate, and the no-value argument of inheriting pretrained weights in filter pruning.

In this work, we attempt to explain the confusing state of network pruning by demystifying these two mysteries. Specifically, (1) we first clarify the fairness principle in pruning experiments and summarize the widely used comparison setups; (2) then we unveil the two pruning mysteries and point out the central role of network trainability, which has not been well recognized so far; (3) finally, we conclude the paper and give some concrete suggestions on how to calibrate the pruning benchmarks in the future.

## Reproducing our results

TODO: Will update soon. Stay tuned!

## Acknowledgments

In this code, we refer to the following implementations: Regularization-Pruning, pytorch imagenet example, rethinking-network-pruning, EigenDamage-Pytorch, pytorch_resnet_cifar10. Many thanks to them!

## Citation

If our code helps with your research or work, please consider citing our paper. Thanks!

```bibtex
@article{wang2023why,
    title={Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning},
    author={Wang, Huan and Qin, Can and Bai, Yue and Fu, Yun},
    journal={arXiv preprint arXiv:2301.05219},
    year={2023},
}
```
