
Variational Autoencoder

This is an enhanced implementation of the Variational Autoencoder. Both fully connected and convolutional encoder/decoder architectures are included in this model. Please star the repository if you like this implementation.
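For orientation, the sketch below shows the core of a fully connected VAE (encoder, reparameterization trick, decoder) in PyTorch. It is a minimal illustration only; the framework, class name, and layer sizes are assumptions and do not mirror this repository's code.

```python
# Minimal fully connected VAE sketch (PyTorch). Names and sizes are
# illustrative assumptions, not the layers used in this repository.
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        # Encoder: maps x to the mean and log-variance of q(z|x)
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.fc_mu = nn.Linear(h_dim, z_dim)
        self.fc_logvar = nn.Linear(h_dim, z_dim)
        # Decoder: maps z back to the mean of p(x|z)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

if __name__ == "__main__":
    model = ToyVAE()
    x = torch.rand(8, 784)  # a fake batch of flattened images
    x_mean, mu, logvar = model(x)
    # KL(q(z|x) || N(0, I)) in closed form, averaged over the batch
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu**2 - logvar.exp(), dim=1))
    print(x_mean.shape, kl.item())
```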

Use

$ python vae_train_amine.py   # for training
$ python sample.py            # for sampling

Update

  1. Removed learning of the standard deviation on the Gaussian observation decoder.
  2. Set the standard deviation of the observation to a fixed hyperparameter (see the sketch after this list).
  3. Added deconvolutional CNN support for the Anime dataset.
  4. Removed the Anime dataset itself to avoid legal issues.
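To illustrate item 2, the snippet below sketches a Gaussian reconstruction loss in which the standard deviation is a fixed hyperparameter rather than a learned decoder output. The function name and the default sigma value are assumptions for illustration, not taken from this repository.

```python
# Hedged sketch: Gaussian negative log-likelihood with a fixed sigma.
import math
import torch

def gaussian_nll_fixed_sigma(x, x_mean, sigma=0.1):
    """NLL of x under N(x_mean, sigma^2 I), with sigma treated as a
    hyperparameter. sigma=0.1 is an arbitrary illustrative value."""
    var = sigma ** 2
    nll = 0.5 * ((x - x_mean) ** 2 / var + math.log(2 * math.pi * var))
    return nll.sum(dim=1).mean()  # sum over pixels, average over the batch

if __name__ == "__main__":
    x = torch.rand(8, 784)       # fake batch of flattened images
    x_mean = torch.rand(8, 784)  # fake decoder output
    print(gaussian_nll_fixed_sigma(x, x_mean))
```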

Pre-Trained Models

There are two pretrained models:

  1. Anime
  2. MNIST

The weights of the pretrained models are located in the weights folder.

Samples

Anime

MNIST

Latent Space Distribution
