About Code for the paper "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP 2023 Findings)
Deepak Ghimire, Kilho Lee, and Seong-heum Kim, Loss-aware automatic selection of structured pruning criteria for deep neural network acceleration, Image and Vision Computing, vol. 136, p. 104745, 2023.
A framework for pruning LLMs to any size and any configuration.
Code repository for paper "Efficient Structured Pruning and Architecture Searching for Group Convolution" https://arxiv.org/abs/1811.09341
[AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models
💍 Efficient tensor decomposition-based filter pruning
This repository is the official implementation of the paper "Pruning via Iterative Ranking of Sensitivity Statistics" and implements novel pruning/compression algorithms for deep neural networks. Among others, it implements structured pruning before training with actual parameter shrinking, as well as unstructured pruning before and during training.
A framework that helps developers apply structured pruning to TensorFlow models.
Code for CHIP: CHannel Independence-based Pruning for Compact Neural Networks (NeurIPS 2021).
[NeurIPS 2023] Structural Pruning for Diffusion Models
Awesome papers and resources in deep neural network pruning, with source code.
Knowledge distillation from Ensembles of Iterative pruning (BMVC 2020)
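The repositories above share one theme: structured pruning removes whole units (filters, channels, heads) so the network actually shrinks, unlike unstructured pruning, which only zeroes individual weights. As a minimal illustration of the idea (not taken from any of the listed repos), here is a hypothetical sketch of L1-norm filter pruning in NumPy; the function name and the keep ratio are illustrative assumptions:

```python
import numpy as np

def prune_filters_l1(weight, keep_ratio=0.5):
    """Structured pruning sketch: drop the output filters with the
    smallest L1 norms, physically shrinking the layer instead of
    masking individual weights.

    weight: conv weight of shape (out_channels, in_channels, kH, kW)
    Returns (pruned_weight, kept_filter_indices).
    """
    out_channels = weight.shape[0]
    n_keep = max(1, int(round(out_channels * keep_ratio)))
    # Score each filter by the L1 norm of its weights (a common saliency proxy).
    scores = np.abs(weight).reshape(out_channels, -1).sum(axis=1)
    # Keep the n_keep highest-scoring filters, in their original order.
    kept = np.sort(np.argsort(scores)[-n_keep:])
    return weight[kept], kept

# Example: prune half the filters of a random 8x4x3x3 conv layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))
w_pruned, kept = prune_filters_l1(w, keep_ratio=0.5)
print(w_pruned.shape)  # (4, 4, 3, 3)
```

In a real model, the next layer's input channels (and any batch-norm statistics) must be sliced consistently with `kept`; that bookkeeping is what the frameworks listed here automate.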