[NeurIPS 2021] Official PyTorch Code of Scaling Up Exact Neural Network Compression by ReLU Stability
A collection of works on reducing model size or designing ASIC/FPGA accelerators for machine learning
Notes and implementations for Professor Hung-yi Lee's ML 2020 machine learning course
Code for our AISTATS'21 paper "Mirror Descent View for Neural Network Quantization"
INTERSPEECH 2020: "Sparse Mixture of Local Experts for Efficient Speech Enhancement"
Overparameterization and overfitting are common concerns when designing and training deep neural networks. Network pruning is an effective strategy for reducing or limiting network complexity, but it often requires time- and compute-intensive procedures to identify the most important connections and the best-performing hyperparameters. We s…
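The pruning workflow described above recurs in several entries in this list. As a rough illustration only, here is a minimal global magnitude-pruning sketch in PyTorch; it is an assumed generic baseline, not the method of any specific repository listed here, and `magnitude_prune` and its `sparsity` parameter are hypothetical names:

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the globally smallest-magnitude weights of Linear/Conv layers.

    Hypothetical illustration of magnitude pruning; `sparsity` is the
    fraction of weights to remove across the whole network.
    """
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    # Global threshold: the k-th smallest absolute weight over all layers.
    all_w = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(int(sparsity * all_w.numel()), 1)
    threshold = torch.kthvalue(all_w, k).values
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).to(w.dtype))  # apply binary mask
```

After pruning, a short fine-tuning pass is typically used to recover accuracy; the expensive part the paragraph above alludes to is choosing `sparsity` and the pruning schedule, which usually requires a hyperparameter search.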
Deep Neural Network Compression based on Student-Teacher Network
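The student-teacher entry above refers to knowledge distillation. As a hedged sketch of the standard Hinton-style distillation objective (a generic formulation, not necessarily the exact loss of that repository; `T` and `alpha` are assumed hyperparameter names):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.9):
    """Hinton-style knowledge distillation: KL divergence between
    temperature-softened teacher and student distributions, blended
    with hard-label cross-entropy on the ground truth."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```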
Homework for Machine Learning (Spring 2019) at NTU
MUSCO: Multi-Stage COmpression of neural networks
PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression" (ICCV 2019)
Applications of deep learning models, including DNNs, CNNs (1D and 2D), RNNs (LSTM and GRU), and variational autoencoders, written from scratch in TensorFlow.
💍 Efficient tensor decomposition-based filter pruning
AIMET GitHub pages documentation
Code for "Variational Depth Search in ResNets" (https://arxiv.org/abs/2002.02797)
This is the official implementation of "DHP: Differentiable Meta Pruning via HyperNetworks".
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression (CVPR 2020)