TensorFlow implementation of weight and unit pruning and sparsification
Updated Nov 14, 2018 - Jupyter Notebook
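The distinction the first repository draws between weight and unit pruning can be illustrated with a minimal NumPy sketch (the function names and the magnitude/L2-norm criteria here are illustrative assumptions, not that repository's API):

```python
import numpy as np

def weight_prune(w, sparsity):
    # Unstructured: zero out the smallest-magnitude individual weights.
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def unit_prune(w, sparsity):
    # Structured: zero out entire columns (units) with the smallest L2 norm.
    k = int(w.shape[1] * sparsity)
    pruned = w.copy()
    if k > 0:
        drop = np.argsort(np.linalg.norm(w, axis=0))[:k]
        pruned[:, drop] = 0.0
    return pruned
```

Weight pruning gives finer-grained sparsity but an irregular pattern; unit pruning removes whole neurons, which shrinks the dense matrix shapes and is easier to accelerate.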
Repository to track the progress in model compression and acceleration
A simple C++14 and CUDA-based header-only library with tools for sparse machine learning.
Complex-valued neural networks for PyTorch, and variational dropout for real and complex layers.
(Unstructured) Weight Pruning via Adaptive Sparsity Loss
A research library for PyTorch-based neural network pruning, compression, and more.
An implementation and report of twice-Ramanujan graph sparsifiers.
Sparsify Your Flux Models
Feather is a module that enables effective sparsification of neural networks during training. This repository accompanies the paper "Feather: An Elegant Solution to Effective DNN Sparsification" (BMVC2023).
Improves the communication efficiency of federated learning by sparsifying the parameters uploaded by clients.
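One common way to sparsify client uploads, sketched below with NumPy, is top-k sparsification: each client sends only the k largest-magnitude entries of its update as (index, value) pairs, and the server re-expands them. The function names are illustrative assumptions, not the repository's API:

```python
import numpy as np

def topk_sparsify(update, k):
    # Client side: keep only the k largest-magnitude entries and
    # transmit (indices, values) instead of the dense vector.
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

def densify(idx, vals, size):
    # Server side: reconstruct the sparse update as a dense vector,
    # treating all untransmitted entries as zero.
    out = np.zeros(size)
    out[idx] = vals
    return out
```

For a vector of n floats this reduces the upload from n values to k index/value pairs, at the cost of dropping the small-magnitude coordinates.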
Code for CRATE (Coding RAte reduction TransformEr).
Sparsity-aware deep learning inference runtime for CPUs
CS328 Introduction to Data Science - Prof. Anirban Dasgupta - Project: Sparsifying Networks while Preserving Properties
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models