Here we experiment with different types of autoencoders: the vanilla AutoEncoder (AE), the Variational AutoEncoder (VAE), and the Vector Quantized Variational AutoEncoder (VQ-VAE).
We train models under different objectives: unsupervised, supervised, and semi-supervised.
We use common deep learning datasets such as MNIST, CIFAR-10, CIFAR-100, and CelebA.
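What separates a VAE from a vanilla AE is that the encoder outputs a distribution over the latent space rather than a point, sampled via the reparameterization trick so the model stays differentiable, and a KL term regularizes that distribution toward a standard normal prior. A minimal NumPy sketch of those two pieces (the actual models here would use a DL framework; the function names are illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I).

    Moving the randomness into eps keeps z differentiable w.r.t. mu and
    log_var, which is what lets a VAE be trained end-to-end with backprop.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)) per sample, summed over latent dims."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))        # batch of 4 samples, latent dimension 2
log_var = np.zeros((4, 2))   # log-variance 0, i.e. unit variance
z = reparameterize(mu, log_var, rng)
print(z.shape)                       # (4, 2)
print(kl_divergence(mu, log_var))    # all zeros: posterior equals the prior
```

With `mu = 0` and `log_var = 0` the approximate posterior already equals the `N(0, I)` prior, so the KL term vanishes; any other setting of the encoder outputs yields a positive penalty.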
- Auto-Encoding Variational Bayes (original VAE paper) https://arxiv.org/abs/1312.6114
- CelebA dataset http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
- From Autoencoder to Beta-VAE (Lilian Weng) https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html#sparse-autoencoder
- Delving Deep into Rectifiers (He weight initialization) https://arxiv.org/abs/1502.01852
- Neural Discrete Representation Learning (VQ-VAE) https://arxiv.org/abs/1711.00937