
FastH

Code accompanying the article "What if Neural Networks had SVDs?", accepted for spotlight presentation at NeurIPS 2020.

UPDATE: We now recommend using the newer fasth++ algorithm from fasthpp.py, which uses only PyTorch (no need to compile CUDA code)!

If, for some reason, you want to use our CUDA code, please see this Google Colab.

Requirements

First, check out how to run the code in Google Colab.

To install locally, run

pip install -r requirements.txt

Check the installation by running the test cases.

python test_case.py

See test_case.py for expected output.
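
As an additional sanity check, recall that an orthogonal transform preserves Euclidean norms. The snippet below is a minimal sketch of such a check, not the repository's test_case.py; it uses the Orthogonal wrapper from the example in the next section, and assumes your environment satisfies the installation requirements above (move tensors to the GPU if the wrapper requires it).

import torch
from fasth_wrapper import Orthogonal

# Sketch of a sanity check (not the repository's test_case.py):
# orthogonal transforms are isometries, so the column norms of U(X)
# should match those of X up to numerical error.
d, bs = 512, 32
U = Orthogonal(d, 32)
X = torch.randn(d, bs)
Y = U(X)
assert torch.allclose(X.norm(dim=0), Y.norm(dim=0), atol=1e-4)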

Minimal Working Example

import torch
from fasth_wrapper import Orthogonal

class LinearSVD(torch.nn.Module):
	def __init__(self, d, m=32):
		super(LinearSVD, self).__init__()
		self.d = d

		# Parameterize the layer by SVD factors: two orthogonal
		# transforms and a diagonal of singular values near 1.
		self.U = Orthogonal(d, m)
		self.D = torch.empty(d, 1).uniform_(0.99, 1.01)
		self.V = Orthogonal(d, m)

	def forward(self, X):
		# Apply the three factors one at a time.
		X = self.U(X)
		X = self.D * X
		X = self.V(X)
		return X

bs = 32   # batch size
d  = 512  # feature dimension
neuralSVD = LinearSVD(d=d)
neuralSVD(torch.zeros(d, bs).normal_())
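
For intuition about what Orthogonal computes: FastH parameterizes an orthogonal matrix as a product of Householder reflections H_i = I - 2 v_i v_i^T / ||v_i||^2 and speeds up applying that product. The sketch below is a naive sequential reference in plain PyTorch, not the FastH algorithm itself; the class name and the one-reflection-per-dimension parameterization are illustrative assumptions.

import torch

# Illustrative reference (not the FastH implementation): an orthogonal
# transform expressed as a product of d Householder reflections,
# applied one at a time. FastH computes the same product faster by
# grouping reflections so the work becomes matrix-matrix products.
class NaiveHouseholder(torch.nn.Module):
	def __init__(self, d):
		super().__init__()
		self.V = torch.nn.Parameter(torch.randn(d, d))  # one vector per reflection

	def forward(self, X):  # X has shape (d, batch)
		for v in self.V:  # apply H_i X = X - 2 v (v^T X) / ||v||^2
			v = v / v.norm()
			X = X - 2 * v[:, None] * (v @ X)[None, :]
		return X

Each reflection is a cheap rank-one update, but the d reflections must be applied one after another; this sequential dependency is the bottleneck the FastH algorithm removes.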

BibTeX

If you use this code, please cite

@inproceedings{fasth,
    title={What If Neural Networks had SVDs?},
    author={Mathiasen, Alexander and Hvilsh{\o}j, Frederik and J{\o}rgensen, Jakob R{\o}dsgaard and Nasery, Anshul and Mottin, Davide},
    booktitle={NeurIPS},
    year={2020}
}

A previous version of the article was presented at the ICML workshop on Invertible Neural Networks and Normalizing Flows. This does not constitute a dual submission because the workshop does not qualify as an archival, peer-reviewed venue.
