Official PyTorch implementation of SMC-Bench - "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" (ICLR 2023)

Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang

University of Texas at Austin, Eindhoven University of Technology

The "Sparsity May Cry" Benchmark (SMC-Bench) is a collection of benchmark in pursuit of a more general evaluation and unveiling the true potential of sparse algorithms. SMC-Bench contains carefully curated 4 diverse tasks with 10 datasets, that accounts for capturing a wide-range of domain-specific knowledge.

The benchmark organizers can be contacted at s.liu@tue.nl.

Table of contents

- Installation of SMC-Bench
- Training of SMC-Bench
- Tasks, Models, and Datasets
- Sparse Algorithms
- Results

Installation of SMC-Bench

Please check INSTALL.md for installation instructions.

Training of SMC-Bench

Please check TRAINING.md for training instructions.

Tasks, Models, and Datasets

Specifically, we consider a broad set of tasks including commonsense reasoning, arithmetic reasoning, multilingual translation, and protein prediction, whose content spans multiple domains and requires a vast amount of commonsense knowledge and a solid mathematical and scientific background to solve. Note that none of the datasets in SMC-Bench was created from scratch for the benchmark; we rely on pre-existing datasets, as researchers have implicitly agreed that they are challenging, interesting, and of high practical value. The models and datasets used for SMC-Bench are summarized below.


Sparse Algorithms

After Training: Lottery Ticket Hypothesis, Magnitude After Training, Random After Training, oBERT (a minimal magnitude-pruning sketch follows this list).

During Training: Gradual Magnitude Pruning.

Before Training: Magnitude Before Training, SNIP, Rigging the Lottery, Random Before Training.
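
The repository wires these methods into the benchmark's own training pipelines; as a standalone illustration of the simplest after-training family, below is a minimal sketch of one-shot global magnitude pruning using PyTorch's built-in `torch.nn.utils.prune` utilities. The toy model and the 90% sparsity level are placeholders for illustration, not SMC-Bench's actual models or configuration.

```python
# Minimal sketch: one-shot global magnitude pruning ("Magnitude After Training").
# Assumption: a trained model is available; a toy MLP stands in for it here.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Collect every prunable weight tensor in the model.
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# Zero out the 90% of weights with the smallest magnitude, ranked globally
# across all layers rather than per layer.
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

# Report the resulting sparsity, then make the pruning masks permanent.
zeros = sum(int((m.weight == 0).sum()) for m, _ in to_prune)
total = sum(m.weight.numel() for m, _ in to_prune)
print(f"global sparsity: {zeros / total:.2%}")
for module, name in to_prune:
    prune.remove(module, name)
```

The during- and before-training methods listed above differ mainly in when and how often such masks are computed: Gradual Magnitude Pruning re-applies magnitude pruning on a schedule during training, while methods like SNIP and Rigging the Lottery determine or update masks at or near initialization.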

Results

Commonsense Reasoning

Arithmetic Reasoning

Protein Property Prediction

Multilingual Translation
