# ML - Variational Autoencoders (VAE)

| Paper | Conference | Remarks |
| --- | --- | --- |
| Learning Structured Output Representation using Deep Conditional Generative Models | NIPS 2015 | 1. Develop a scalable deep conditional generative model for structured output variables using Gaussian latent variables. 2. Provide novel strategies for building robust structured prediction algorithms, such as a recurrent prediction network architecture, input noise injection, and multi-scale prediction training. |
| Tutorial on Variational Autoencoders | arXiv 2016 | 1. Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. 2. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits, faces, house numbers, CIFAR images, physical models of scenes, segmentation, and predictions of the future from static images. 3. Introduces the intuitions behind VAEs, explains the mathematics behind them, and describes their empirical behavior (see the minimal sketch below the table). |
| Adversarially Regularized Autoencoders | ICML 2018 | 1. Propose a flexible method for training deep latent variable models of discrete structures. 2. Extend the Wasserstein autoencoder (WAE) to model discrete sequences, then further explore different learned priors targeting a controllable representation. 3. The proposed model generates natural textual outputs and supports manipulations in the latent space that induce changes in the output space. |
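
All three entries build on the core VAE objective: maximize the evidence lower bound (ELBO), i.e. a reconstruction term minus the KL divergence between the approximate posterior q(z|x) and the prior N(0, I), made differentiable via the reparameterization trick. As a quick companion to the tutorial entry, here is a minimal PyTorch sketch of a vanilla VAE; the `VAE` class, the 784-400-20 layer sizes, and the `elbo_loss` helper are illustrative assumptions for intuition, not code from any of the listed papers.

```python
# Minimal vanilla VAE sketch (illustrative; names and sizes are assumptions,
# not taken from the papers above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, sigma
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I)),
    # using the closed-form KL between two Gaussians.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Smoke test on random data in [0, 1] (stand-in for flattened images)
model = VAE()
x = torch.rand(8, 784)
recon, mu, logvar = model(x)
loss = elbo_loss(recon, x, mu, logvar)
loss.backward()
```

The conditional variant in the NIPS 2015 paper follows the same template, but conditions both the encoder and decoder on the input x and predicts a structured output y; the ICML 2018 paper replaces the KL term with an adversarially trained critic over the latent space.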
Back to index