
Question about Reconstructing Data and Interpreting Latent Space #86

Open
armarion opened this issue Feb 16, 2024 · 1 comment

@armarion

This is a general question about reconstructing data after training the VAE, that is, passing the original data into the encoder and decoder...

In the code examples of these VAEs, the reconstruction is performed by passing the original data into the full network forward pass. This pass includes adding noise to the data via the reparameterization trick before the decoder step. But shouldn't we evaluate reconstruction quality without this noise? For example, should we instead just take the means from the decoder, without any introduced noise?

And related to this question, when assessing the distributional qualities of the latent space, should we be looking at the encoded latent states without this noise (again just looking at the vector of mu)?
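To make the two options concrete, here is a minimal PyTorch sketch (the model layout and names are illustrative, not the code from this repository). It contrasts the training-style forward pass, which samples a latent vector via the reparameterization trick, with a deterministic reconstruction that decodes the mean mu directly:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal illustrative VAE; layer sizes are arbitrary."""
    def __init__(self, in_dim=784, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent_dim)  # outputs [mu, log_var]
        self.dec = nn.Linear(latent_dim, in_dim)

    def encode(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        return mu, log_var

    def reparameterize(self, mu, log_var):
        std = torch.exp(0.5 * log_var)
        return mu + std * torch.randn_like(std)  # stochastic sample

    def forward(self, x):
        mu, log_var = self.encode(x)
        z = self.reparameterize(mu, log_var)  # training path: sampled z
        return self.dec(z), mu, log_var

model = TinyVAE()
model.eval()
with torch.no_grad():
    x = torch.randn(4, 784)
    # Deterministic reconstruction: decode the mean, no sampling noise.
    mu, _ = model.encode(x)
    recon_det = model.dec(mu)
    # Stochastic reconstruction: full forward pass with sampled z.
    recon_stoch, _, _ = model(x)
```

Evaluating `recon_det` corresponds to the question's suggestion of reconstructing without the injected randomness, while `recon_stoch` is what the repository's example code computes.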

@ranabanik

I am not sure the reparameterization step should be described as noise. Without reparameterization the model would just be a plain encoder-decoder, not a variational autoencoder.
Sampling z from N(mu, sigma^2) is what makes the latent space a distribution rather than a point estimate, and writing the sample as z = mu + sigma * eps (the reparameterization trick) keeps the stochastic step differentiable so backpropagation can train the encoder. It also acts as a regularizer that discourages the network from simply memorizing the inputs.
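A small sketch of why the trick matters for training: by writing z = mu + sigma * eps with eps drawn independently, the randomness is isolated in eps, so gradients flow through mu and log_var (variable names here are illustrative):

```python
import torch

# Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I).
# The sample is random, but z is a differentiable function of mu and log_var.
mu = torch.zeros(3, requires_grad=True)
log_var = torch.zeros(3, requires_grad=True)
eps = torch.randn(3)

z = mu + torch.exp(0.5 * log_var) * eps
z.sum().backward()

# A differentiable path exists back to the encoder outputs:
print(mu.grad)       # ones: dz_i/dmu_i = 1
print(log_var.grad)  # depends on the sampled eps
```

If we instead sampled z directly from N(mu, sigma^2) inside the graph, there would be no gradient path from the reconstruction loss back to mu and log_var.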
