Improving Variational Inference with Inverse Autoregressive Flow

File metadata and controls

5 lines (3 loc) · 1.12 KB

An interesting paper. It combines two recently successful approaches to deep generative modeling: normalizing flows and autoregressive models. While the idea of normalizing flows is powerful, the authors claim that existing flows do not scale well to high-dimensional latent spaces (but why?). Their intuition is that Gaussian autoregressive functions do scale well to high-dimensional latent spaces. Using autoregressive models seems like a very good idea (see e.g. MADE, PixelCNN). The authors show that such functions can also be turned into invertible non-linear transformations of the input, so it makes sense to use inverse Gaussian autoregressive functions as a new type of flow. They show that their model achieves state-of-the-art results. A direct comparison between normalizing flow and inverse autoregressive flow is not performed, though!
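
To make the idea concrete, here is a minimal sketch of one IAF step in NumPy. It assumes `m` and `s` are the outputs of some autoregressive network (e.g. a MADE-style masked network, not implemented here) evaluated on `z`, so that the i-th entries depend only on `z[:i]`; the function name `iaf_step` and the toy inputs are my own stand-ins, not the paper's code.

```python
import numpy as np

def iaf_step(z, m, s):
    """One inverse autoregressive flow step (sketch of the numerically stable variant).

    m, s are assumed to come from an autoregressive network evaluated on z,
    so the Jacobian of the transform w.r.t. z is lower triangular.
    """
    sigma = 1.0 / (1.0 + np.exp(-s))        # sigmoid gate on the scale
    z_new = sigma * z + (1.0 - sigma) * m   # elementwise affine transform of z
    log_det = np.sum(np.log(sigma))         # triangular Jacobian -> log|det| = sum_i log sigma_i
    return z_new, log_det

# toy usage with random stand-ins for the autoregressive network's outputs
rng = np.random.default_rng(0)
z = rng.standard_normal(4)
m, s = rng.standard_normal(4), rng.standard_normal(4)
z, log_det = iaf_step(z, m, s)
```

Because the log-determinant is just a sum over dimensions, each step stays cheap even in high-dimensional latent spaces, which is the scalability argument the authors make.
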

It is quite surprising (at least to me) that applying many steps of conditional Gaussian distributions can model arbitrarily complex distributions. Why is that the case? I was wondering because normalizing flows are designed precisely to get away from simple Gaussian distributions.
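
For context, both normalizing flows and IAF rest on the change-of-variables identity below: each invertible step only adds a log-determinant correction, and composing many such steps can bend the initial Gaussian into something far from Gaussian while the density stays tractable.

```latex
% Density after T invertible steps z_t = f_t(z_{t-1}), starting from z_0 ~ q(z_0 | x)
\log q(z_T \mid x) = \log q(z_0 \mid x)
  - \sum_{t=1}^{T} \log \left| \det \frac{\partial z_t}{\partial z_{t-1}} \right|
% For IAF, z_t = \mu_t + \sigma_t \odot z_{t-1} with autoregressive \mu_t, \sigma_t,
% so each log-determinant reduces to \sum_i \log \sigma_{t,i}.
```
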