Diffusion Variational Autoencoder (∆VAE)

A standard Variational Autoencoder with a Euclidean latent space is structurally incapable of capturing the topological properties of certain datasets. To remove these topological obstructions, we introduce the Diffusion Variational Autoencoder (∆VAE), which admits an arbitrary closed manifold as its latent space.

Implementation of Diffusion Variational Autoencoders

This repository contains the code for the Diffusion Variational Autoencoders paper [1]. It includes the code needed to embed image datasets into the following latent spaces: the d-dimensional hypersphere, the Clifford torus, the torus embedded in 3-dimensional Euclidean space, the orthogonal group in three dimensions O(3), the special orthogonal group in three dimensions SO(3), and the d-dimensional real projective space, as well as the standard Variational Autoencoder with a d-dimensional Euclidean latent space [2].
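The core idea behind embedding into such manifolds is to approximate a Brownian-motion (heat kernel) transition on the latent manifold by taking a small Gaussian step in the ambient space and projecting back onto the manifold. The snippet below is a minimal numpy sketch of that random-walk step for a spherical latent space; it is illustrative only and does not reproduce the repository's actual classes or function names.

```python
import numpy as np

def project_to_sphere(x, eps=1e-8):
    """Project ambient points in R^3 onto the unit sphere S^2."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def spherical_random_walk_step(mu, sigma, rng=None):
    """One random-walk approximation of a Brownian-motion step on S^2:
    take a Gaussian step in the ambient space, then project back.
    mu: (batch, 3) points on the sphere, sigma: (batch, 1) step sizes."""
    rng = rng or np.random.default_rng()
    step = sigma * rng.standard_normal(mu.shape)
    return project_to_sphere(mu + step)

# Example: sample a latent code around a mean direction on S^2.
mu = project_to_sphere(np.array([[1.0, 0.5, -0.2]]))
z = spherical_random_walk_step(mu, sigma=np.array([[0.1]]))
print(z, np.linalg.norm(z, axis=-1))  # z lies on the unit sphere
```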

Dependencies

  • python>=3.6
  • tensorflow>=1.7
  • imageio
  • scipy

UPDATE:

The branch tf2_migration can be run with tensorflow 1.15, which provides TF1-compatible behavior together with features from TF2.

Notebook Examples

  • binary_MNIST: This notebook shows how to train a ∆VAE with each of the available manifolds, using multi-layer perceptrons for the encoder and decoder networks; a minimal sketch of such an encoder/decoder pair follows below.
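The following is a minimal TF2-style tf.keras sketch of an MLP encoder/decoder pair for binarized MNIST, assuming a spherical latent space; the layer sizes and output parametrization are hypothetical and the notebook's actual architecture may differ.

```python
import tensorflow as tf

latent_dim = 3   # ambient dimension of S^2 for a spherical latent space
hidden = 512     # hypothetical hidden width

# Encoder: flattened 28x28 binary MNIST -> ambient mean + log step size.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(latent_dim + 1),
])

# Decoder: latent point on the manifold -> Bernoulli logits over pixels.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(784),  # logits; train with sigmoid cross-entropy
])
```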

Results after training

After running any of the example notebooks, the outputs are saved in the results folder within the notebooks directory. The following subfolders are created:

  • images: contains images of the data embedded in latent space and their reconstructions, provided a plotting function is implemented for the given manifold.
  • tensorboard: contains the log files that monitor the relevant components of the loss, metrics of interest, and the computation graph.
  • weights_folder: contains the trained weights, which are saved once training finishes.
  • parameters: contains the json files with the parameter values used for the given experiment, including the encoder, decoder, and diffusion variational autoencoder parameters; see the sketch after this list for one way to inspect them.
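A quick way to review the saved settings is to load the json parameter files directly. This is a small illustrative sketch; the folder layout and file names are assumptions based on the description above and depend on the experiment that was run.

```python
import json
from pathlib import Path

# Hypothetical location of the parameters folder created by a notebook run.
params_dir = Path("notebooks/results/parameters")

for param_file in params_dir.glob("*.json"):
    with open(param_file) as f:
        params = json.load(f)
    print(param_file.name, params)  # encoder, decoder and ∆VAE settings
```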

Contact

For any questions regarding the code or the paper, please contact Luis Armando Pérez Rey.

Citation

[1] Perez Rey, L.A., Menkovski, V., Portegies, J.W. (2020). Diffusion Variational Autoencoders. Twenty-Ninth International Joint Conference on Artificial Intelligence.

BibTeX

@inproceedings{ijcai2020-375,
  title     = {Diffusion Variational Autoencoders},
  author    = {Perez Rey, Luis A. and Menkovski, Vlado and Portegies, Jim},
  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on
               Artificial Intelligence, {IJCAI-20}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  editor    = {Christian Bessiere},
  pages     = {2704--2710},
  year      = {2020},
  month     = {7},
  note      = {Main track},
  doi       = {10.24963/ijcai.2020/375},
  url       = {https://doi.org/10.24963/ijcai.2020/375},
}

References

[2] Kingma, D.P., Welling, M. (2014). Auto-Encoding Variational Bayes. International Conference on Learning Representations (ICLR).

License

Apache License 2.0
