This library accelerates neural net training by starting at high sparsity, so that early gradient updates are concentrated on the most important weights, and then progressively reducing sparsity to fine-tune the model. The method is explained in this paper submitted to ICASSP 2023. It works best for training models that require many (>20) epochs on a limited dataset, since we make one pass through the data at the beginning to assess weight importance.
Extensive experiments suggest that the ideal sparsity schedule is harder to tune than the learning rate schedule while offering roughly the same benefits, and that it requires very high learning rates or redundant network components to work. Therefore, I stopped working on this.
Nevertheless, if you are interested in using SNIP, this is a working PyTorch implementation.
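For reference, the core of SNIP's single-pass importance assessment can be sketched as follows. This is a minimal illustration with hypothetical names (`snip_masks`, `keep_fraction` are not this library's API): run one forward/backward pass on a batch, score each weight by the magnitude of weight times gradient, and keep the top fraction.

```python
import torch
import torch.nn as nn

def snip_masks(model, loss_fn, inputs, targets, keep_fraction=0.1):
    """Sketch of SNIP-style saliency: one pass to score weight importance.

    Hypothetical helper, not this library's actual API.
    """
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # SNIP connection sensitivity: |weight * gradient| per element.
    scores = {
        name: (p * p.grad).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
    all_scores = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(keep_fraction * all_scores.numel()))
    threshold = torch.topk(all_scores, k).values.min()

    # Binary masks: 1 keeps a weight, 0 prunes it.
    return {name: (s >= threshold).float() for name, s in scores.items()}

# Toy usage: prune a small MLP to roughly 80% sparsity before training.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
masks = snip_masks(model, nn.CrossEntropyLoss(), x, y, keep_fraction=0.2)
kept = sum(m.sum() for m in masks.values())
total = sum(m.numel() for m in masks.values())
sparsity = 1 - kept / total
```

Progressive sparsity reduction would then amount to recomputing or relaxing these masks on a schedule as training proceeds.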