Caffe for Sparse and Low-rank Deep Neural Networks
VIP is a python package/library for angular, reference star and spectral differential imaging for exoplanet/disk detection through high-contrast imaging.
Python machine learning applications in image processing, recommender systems, matrix completion, the Netflix problem, and algorithm implementations including Co-clustering, Funk SVD, SVD++, Non-negative Matrix Factorization, Koren Neighborhood Model, Koren Integrated Model, Dawid-Skene, Platt-Burges, Expectation Maximization, Factor Analysis, ISTA, F…
Multi-channel Weighted Nuclear Norm Minimization for Real Color Image Denoising, ICCV 2017.
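Nuclear norm minimization methods like the one above build on singular value thresholding, the proximal operator of the (unweighted) nuclear norm. A minimal NumPy sketch of that one step — the name `svt` and the toy data are illustrative, not the paper's released code:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-threshold the singular values
    of X by tau, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)   # shrink each singular value toward 0
    return U @ np.diag(s_thr) @ Vt

# A rank-1 matrix plus small noise: thresholding removes the noise directions.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(20), rng.standard_normal(15)
X = np.outer(u, v) + 0.01 * rng.standard_normal((20, 15))
D = svt(X, tau=0.5)
print(np.linalg.matrix_rank(D, tol=1e-6))  # → 1
```

The noise singular values sit far below `tau`, so only the dominant rank-1 direction survives the shrinkage.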
Fine-tuning of diffusion models
Tensorflow implementation of preconditioned stochastic gradient descent
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression, CVPR 2020.
HiCMA: Hierarchical Computations on Manycore Architectures
Software for Testing Accuracy, Reliability and Scalability of Hierarchical computations.
Pytorch implementation of preconditioned stochastic gradient descent (affine group preconditioner, low-rank approximation preconditioner and more)
Deformable Groupwise Image Registration using Low-Rank and Sparse Decomposition
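The low-rank-plus-sparse split used in such registration methods can be sketched with a simple alternating shrinkage scheme — a heuristic in the spirit of robust PCA, not the paper's actual algorithm; all names and parameters below are illustrative:

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def low_rank_sparse(X, tau=1.0, lam=0.1, iters=50):
    """Split X into L + S by alternating a singular-value shrinkage step
    (low-rank part L) with an entrywise shrinkage step (sparse part S)."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        S = soft(X - L, lam)
    return L, S

# Rank-1 "background" plus a few large sparse "deviations".
X = np.outer(np.ones(12), np.ones(9))
X[2, 3] += 5.0
X[7, 1] -= 4.0
L, S = low_rank_sparse(X)
```

By construction of the final `S` step, the residual `X - L - S` is bounded entrywise by `lam`, so `L` absorbs the smooth background and `S` the outlier entries.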
LoRA (Low-Rank Adaptation) inspector for Stable Diffusion
Small project on numerical linear algebra
Pytorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression", ICCV 2019
Lowrankdensity
Multi-slice MR Reconstruction with Low-Rank Tensor Completion
My experiment with multilayer NMF: a deep neural network in which the first several layers use Semi-NMF as a pseudo-activation function to find, in an unsupervised manner, the latent structure embedded in the original data.
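As a rough illustration of the Semi-NMF step such layers rely on, here is a minimal sketch of the multiplicative updates from Ding, Li & Jordan (2010): factor X ≈ F Gᵀ with G nonnegative and F unconstrained. The function name `semi_nmf` and its defaults are illustrative, not this repo's API:

```python
import numpy as np

def _pos(A):
    """Elementwise positive part of A."""
    return (np.abs(A) + A) / 2

def _neg(A):
    """Elementwise negative part of A, returned as a nonnegative matrix."""
    return (np.abs(A) - A) / 2

def semi_nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Semi-NMF: X ≈ F @ G.T with G >= 0 and F unconstrained,
    via the multiplicative updates of Ding, Li & Jordan (2010)."""
    rng = np.random.default_rng(seed)
    G = np.abs(rng.standard_normal((X.shape[1], k)))
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)   # exact least-squares step for F
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((_pos(XtF) + G @ _neg(FtF)) /
                     (_neg(XtF) + G @ _pos(FtF) + eps))
    return F, G

X = np.random.default_rng(1).standard_normal((20, 10))
F, G = semi_nmf(X, k=3)
```

The multiplicative form keeps `G` nonnegative throughout, while `F` is solved exactly at each step, so the reconstruction error decreases monotonically.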
MUSCO: Multi-Stage COmpression of neural networks
Deep learning models have become state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems poses significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low rank matrix factorization durin…
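The core of such low-rank compression can be sketched with a truncated SVD of a single weight matrix: one dense layer becomes two thin ones. The helper name `factorize` and the layer sizes below are illustrative, not the proposed method itself:

```python
import numpy as np

def factorize(W, rank):
    """Replace a dense weight matrix W (m x n) with two thin factors
    A (m x rank) and B (rank x n), so x @ W becomes (x @ A) @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb the singular values into A
    B = Vt[:rank]
    return A, B

W = np.random.default_rng(1).standard_normal((512, 256))
A, B = factorize(W, rank=32)
params_before = W.size
params_after = A.size + B.size
print(params_before, params_after)  # → 131072 24576
```

By the Eckart–Young theorem, `A @ B` is the best rank-32 approximation of `W` in the Frobenius norm, here with roughly a 5x reduction in parameters.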
Convolutive Matrix Factorization in Julia