
Monitor train/val losses and hyper param optimisation #10

Open
sahasuman opened this issue May 19, 2017 · 0 comments
Hi,

How do you monitor whether training is going well? I am a newbie at VAE+GAN training. So far I have only trained CNNs, where the training loss usually decreases gradually. For VAE+GAN, how do you monitor the training and validation losses? Do you consider the training loss to be the combined loss from the 3 terms of the loss function (Eq. 8 in the paper)? My training loss increases initially; is that usual with VAE+GAN? How many epochs are required for the model to converge?
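In case it helps others, tracking the three terms separately (rather than only their weighted sum) usually makes it clearer whether, say, the adversarial term is diverging while the reconstruction term still improves. A minimal sketch of per-term tracking; the term names (`recon`, `kl`, `gan`) and the class itself are hypothetical, not part of this repo:

```python
# Accumulates each loss term separately so train/val curves can be
# plotted per term, not just as the combined objective.
# All names here are illustrative assumptions.
from collections import defaultdict

class LossMonitor:
    """Keeps a per-epoch history of each named loss term."""

    def __init__(self):
        self.history = defaultdict(list)   # term name -> list of epoch means
        self._totals = defaultdict(float)  # running sums for current epoch
        self._count = 0                    # batches seen this epoch

    def record(self, **losses):
        """Record one batch worth of scalar loss values."""
        for name, value in losses.items():
            self._totals[name] += float(value)
        self._count += 1

    def end_epoch(self):
        """Average the batch losses into the history and reset."""
        for name, total in self._totals.items():
            self.history[name].append(total / self._count)
        self._totals.clear()
        self._count = 0

# Example: two batches, then close out the epoch.
monitor = LossMonitor()
monitor.record(recon=1.2, kl=0.3, gan=0.7)
monitor.record(recon=1.0, kl=0.5, gan=0.9)
monitor.end_epoch()
```

Plotting `monitor.history["gan"]` next to `monitor.history["recon"]` over epochs makes an initial rise in the combined loss much easier to diagnose.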

What approach do you take to optimize the following hyper-parameters: recon_vs_gan_weight, real_vs_gen_weight, self.equilibrium, and self.margin (in model/aegan.py)? Could you please give some hints on weighting the three loss terms carefully so that training converges on my dataset?
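One common starting point for weights like these is a log-spaced grid search with short training runs. A rough skeleton, assuming a stand-in `train_and_eval` function that you would replace with a short run of your own training loop returning a validation score (everything here is hypothetical, not code from this repo):

```python
# Log-spaced sweep skeleton over two of the weighting hyper-parameters.
# train_and_eval is a placeholder; substitute a short real training run
# that returns a validation metric (lower is better).
import itertools

def train_and_eval(recon_vs_gan_weight, real_vs_gen_weight):
    # Placeholder objective for illustration only.
    return abs(recon_vs_gan_weight - 1e-2) + abs(real_vs_gen_weight - 0.5)

recon_grid = [10.0 ** k for k in range(-4, 0)]  # 1e-4 .. 1e-1
real_grid = [0.3, 0.5, 0.7]

best = min(
    itertools.product(recon_grid, real_grid),
    key=lambda pair: train_and_eval(*pair),
)
```

Log spacing matters because a weight balancing a reconstruction loss against a GAN loss can easily be off by orders of magnitude; a linear grid would waste most of its budget in one decade.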

Thanks!
