Sparsify Your Flux Models
CS328 Introduction to Data Science - Prof. Anirban Dasgupta - Project: Sparsifying Networks while Preserving Properties
Feather is a module that enables effective sparsification of neural networks during training. This repository accompanies the paper "Feather: An Elegant Solution to Effective DNN Sparsification" (BMVC 2023).
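For intuition, train-time sparsification usually means masking weights in the forward pass while updating the dense copy underneath. Below is a minimal NumPy sketch of that straight-through pattern; the helper names are made up for illustration, and this is not necessarily Feather's exact operator.

```python
import numpy as np

def magnitude_mask(w, sparsity):
    # Illustrative helper: binary mask keeping the largest-magnitude
    # (1 - sparsity) fraction of the weights; ties at the threshold
    # may prune slightly more.
    k = int(sparsity * w.size)
    if k == 0:
        return np.ones_like(w)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return (np.abs(w) > thresh).astype(w.dtype)

def masked_forward(w, x, sparsity):
    # The forward pass sees sparsified weights; in a real trainer the
    # gradient flows "straight through" the mask to the dense weights.
    return (w * magnitude_mask(w, sparsity)) @ x
```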
An implementation and report of the twice-Ramanujan graph sparsifiers.
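The twice-Ramanujan sparsifiers of Batson, Spielman, and Srivastava are built deterministically; as a point of reference, the sketch below implements the simpler randomized baseline they improve on, sampling edges by effective resistance (Spielman-Srivastava). Function and variable names are illustrative, and the dense pseudo-inverse limits this to small graphs.

```python
import numpy as np

def effective_resistance_sparsify(n, edges, weights, q, seed=0):
    """Sample q edges with probability proportional to w_e * R_e and
    reweight kept edges so the sparsifier's Laplacian matches the
    original graph's in expectation."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    # Graph Laplacian L = D - A.
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)  # pseudo-inverse; L itself is singular
    # Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v).
    R = np.array([Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v] for u, v in edges])
    p = weights * R
    p /= p.sum()
    kept = {}
    for e in rng.choice(len(edges), size=q, p=p):
        kept[e] = kept.get(e, 0.0) + weights[e] / (q * p[e])
    return [(edges[e], w) for e, w in kept.items()]
```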
Improves the communication efficiency of federated learning by sparsifying the parameters that clients upload.
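One common scheme is top-k magnitude selection: each client uploads only the k largest-magnitude entries of its update as (index, value) pairs. The sketch below assumes that scheme; the repository's actual method (for example, whether it adds error feedback) may differ.

```python
import numpy as np

def topk_sparsify(update, k):
    # Client side: keep only the k largest-magnitude entries, so the
    # upload is (indices, values) instead of the dense vector.
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def densify(indices, values, dim):
    # Server side: scatter the sparse payload back into a dense vector
    # before aggregation.
    out = np.zeros(dim)
    out[indices] = values
    return out
```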
TensorFlow implementation of weight and unit pruning and sparsification
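The two ideas differ in granularity: weight pruning zeroes individual low-magnitude weights, while unit pruning zeroes whole columns (neurons). Below is a framework-agnostic NumPy sketch of both, with illustrative names.

```python
import numpy as np

def weight_prune(W, sparsity):
    # Unstructured: zero the smallest-magnitude individual weights.
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

def unit_prune(W, sparsity):
    # Structured: zero entire columns (units) with the smallest L2
    # norms, so whole neurons can later be removed from the layer.
    k = int(sparsity * W.shape[1])
    drop = np.argsort(np.linalg.norm(W, axis=0))[:k]
    out = W.copy()
    out[:, drop] = 0.0
    return out
```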
A simple C++14/CUDA header-only library with tools for sparse machine learning.
(Unstructured) Weight Pruning via Adaptive Sparsity Loss
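In broad strokes, such methods add a differentiable sparsity term to the training loss so the network is pushed toward a target sparsity rather than pruned post hoc. The sketch below is a generic soft-sparsity penalty, not the paper's exact loss; with autograd (rather than NumPy) the threshold could itself be learned.

```python
import numpy as np

def soft_sparsity(w, thresh, beta=10.0):
    # Smooth proxy for the fraction of weights with |w| < thresh: a
    # steep sigmoid counts each weight as ~1 when it is prunable.
    return np.mean(1.0 / (1.0 + np.exp(-beta * (thresh - np.abs(w)))))

def sparsity_penalty(w, thresh, target):
    # Squared deviation of soft sparsity from the target level, to be
    # added (scaled by some lambda) to the task loss during training.
    return (soft_sparsity(w, thresh) - target) ** 2
```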
Repository tracking progress in model compression and acceleration.
A research library for PyTorch-based neural network pruning, compression, and more.
Complex-valued neural networks for PyTorch, and variational dropout for real and complex layers.
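For context, sparse variational dropout (Molchanov et al., 2017) learns a per-weight noise variance and prunes weights whose dropout rate grows large; the test-time rule looks roughly like the sketch below. The repository's own API will differ.

```python
import numpy as np

def prune_by_log_alpha(W, log_sigma2, thresh=3.0):
    # Each weight carries a learned noise variance sigma^2.  When
    # alpha = sigma^2 / w^2 is large (log alpha > thresh), the weight
    # is dominated by noise and can be zeroed at inference.
    log_alpha = log_sigma2 - np.log(np.square(W) + 1e-8)
    return np.where(log_alpha > thresh, 0.0, W)
```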
Code for CRATE (Coding RAte reduction TransformEr).
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models.
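A sparsification "recipe" typically pairs a pruning criterion with a sparsity schedule. As a minimal sketch (not this library's actual API), the polynomial schedule of Zhu & Gupta (2017) below ramps sparsity from an initial to a final level over a training window.

```python
def polynomial_sparsity(step, begin, end, s_init=0.0, s_final=0.9):
    # Cubic ramp from s_init to s_final over [begin, end]: pruning is
    # aggressive early in training and tapers off as it converges.
    if step <= begin:
        return s_init
    if step >= end:
        return s_final
    frac = (step - begin) / (end - begin)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3
```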
Sparsity-aware deep learning inference runtime for CPUs