Jupyter Notebooks on Machine Learning

I implement some interesting machine learning topics in Jupyter Notebooks, mostly from scratch. The focus is on understanding rather than achieving state-of-the-art results. So far I have implemented Gaussian Processes, a Variational Autoencoder (VAE), Natural Gradient, Bayesian Linear Regression, and learning trigonometric functions with neural networks. I hope these help you understand the topics better.

  1. Gaussian Process.ipynb: Gaussian Process (a minimal regression sketch appears after this list).

  2. VAE.ipynb: Variational Autoencoder for MNIST (a minimal PyTorch sketch follows the list).

  3. Natural Gradient.ipynb: Natural gradient ascent to learn the parameters of a 1D Gaussian. Empirically, natural gradient ascent converges faster than plain gradient ascent, and the KL divergence between the likelihoods at consecutive training steps stays roughly constant during training. For the theory behind natural gradients I recommend Agustinus Kristiadi's blog. A sketch of the update rule appears after this list.

  4. Bayesian Linear Regression.ipynb: Bayesian linear regression for 1D data with a Gaussian prior, a Gaussian likelihood, and a known likelihood variance. The model's confidence increases as more data points are observed. This demonstrates the power of Bayesian learning: when we have little data, the model itself tells us that it is less confident! For the theory behind Bayesian linear regression, here is a great video; note that my code uses slightly different notation. A sketch of the posterior update follows the list.

  5. learn_transform.py: Can a neural network learn forward and inverse trigonometric functions (sine/cosine/tangent)? My conclusion is that it can, in and around the regions where it has seen data, and that sigmoid activations work better than ReLU here. A small training sketch appears below.
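
For reference, here is a minimal Gaussian Process regression sketch. The RBF kernel, its hyperparameters, the noise level, and the toy sine data are illustrative assumptions and not necessarily what Gaussian Process.ipynb uses.

```python
# Minimal GP regression sketch (NumPy only).
# Kernel choice, hyperparameters and toy data are illustrative assumptions.
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel between two sets of 1D points."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / length_scale ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and covariance of a zero-mean GP at the test points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mu, cov

# Toy usage: fit noisy sine observations, predict on a grid with uncertainty.
X_train = np.linspace(-3, 3, 10)
y_train = np.sin(X_train) + 0.1 * np.random.randn(10)
mu, cov = gp_posterior(X_train, y_train, np.linspace(-5, 5, 100))
std = np.sqrt(np.diag(cov))   # pointwise predictive standard deviation
```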
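The VAE sketch below shows the encoder/decoder structure, the reparameterization trick, and the ELBO loss in PyTorch. The layer sizes, latent dimension, and Bernoulli (binary cross-entropy) likelihood are illustrative assumptions and may differ from VAE.ipynb.

```python
# Minimal VAE sketch in PyTorch. Layer sizes, latent dimension and the
# Bernoulli (binary cross-entropy) likelihood are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=2):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)        # encoder mean
        self.logvar = nn.Linear(h_dim, z_dim)    # encoder log-variance
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps the sampling step differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x_recon, x, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```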
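For the natural gradient notebook, the sketch below premultiplies the ordinary gradient by the inverse Fisher information of a 1D Gaussian parameterized by (mu, sigma). The learning rate, toy data, and parameterization are my assumptions; the notebook's setup may differ.

```python
# Natural-gradient ascent sketch for fitting a 1D Gaussian (mu, sigma).
# The Fisher matrix of N(mu, sigma^2) is diag(1/sigma^2, 2/sigma^2);
# step size and data are illustrative assumptions.
import numpy as np

def grads(x, mu, sigma):
    """Gradients of the mean log-likelihood w.r.t. mu and sigma."""
    d_mu = np.mean(x - mu) / sigma ** 2
    d_sigma = np.mean((x - mu) ** 2) / sigma ** 3 - 1.0 / sigma
    return np.array([d_mu, d_sigma])

x = np.random.randn(500) * 2.0 + 3.0          # data from N(3, 2^2)
mu, sigma, lr = 0.0, 1.0, 0.1

for step in range(200):
    g = grads(x, mu, sigma)
    F = np.diag([1.0 / sigma ** 2, 2.0 / sigma ** 2])  # Fisher information
    nat_g = np.linalg.solve(F, g)             # natural gradient = F^{-1} g
    mu, sigma = mu + lr * nat_g[0], sigma + lr * nat_g[1]

print(mu, sigma)                              # should approach (3, 2)
```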
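The Bayesian linear regression sketch below implements the standard Gaussian posterior update for the weights of a 1D model with a bias term. The prior precision alpha and the known noise precision beta are illustrative assumptions rather than the notebook's exact values.

```python
# Bayesian linear regression sketch for 1D inputs with a bias term.
# Prior precision (alpha) and noise precision (beta) are illustrative assumptions.
import numpy as np

def posterior(X, y, alpha=1.0, beta=25.0):
    """Posterior mean and covariance of weights with prior N(0, alpha^-1 I)."""
    Phi = np.column_stack([np.ones_like(X), X])        # design matrix [1, x]
    S_inv = alpha * np.eye(2) + beta * Phi.T @ Phi     # posterior precision
    S = np.linalg.inv(S_inv)                           # posterior covariance
    m = beta * S @ Phi.T @ y                           # posterior mean
    return m, S

def predict(x_new, m, S, beta=25.0):
    """Predictive mean and variance at new inputs."""
    phi = np.column_stack([np.ones_like(x_new), x_new])
    mean = phi @ m
    var = 1.0 / beta + np.sum(phi @ S * phi, axis=1)   # noise + model uncertainty
    return mean, var

# Toy usage: predictive uncertainty shrinks as more points are observed.
X = np.array([0.1, 0.4, 0.9])
y = 2.0 * X - 0.5 + 0.2 * np.random.randn(3)
m, S = posterior(X, y)
mean, var = predict(np.linspace(-1, 2, 5), m, S)
```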
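Finally, a small sketch of the trigonometric-function experiment: an MLP with sigmoid activations fitted to sin(x) on a bounded interval and then evaluated outside it. The architecture, interval, and training budget are illustrative assumptions; learn_transform.py may differ in the details.

```python
# Small MLP sketch for fitting sin(x) on a bounded interval (PyTorch).
# Architecture, interval and training budget are illustrative assumptions.
import math
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 64), nn.Sigmoid(),   # sigmoid activations, per the observation above
    nn.Linear(64, 64), nn.Sigmoid(),
    nn.Linear(64, 1),
)

x = torch.linspace(-2 * math.pi, 2 * math.pi, 512).unsqueeze(1)  # training region
y = torch.sin(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# Outside the training interval the fit degrades quickly: the network mainly
# interpolates where it has seen data.
x_far = torch.linspace(3 * math.pi, 4 * math.pi, 100).unsqueeze(1)
print(nn.functional.mse_loss(net(x_far), torch.sin(x_far)).item())
```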