Trained faces are all blurry and seem not learnt #18

Open
ecilay opened this issue May 31, 2018 · 5 comments

ecilay commented May 31, 2018

I implemented a version in PyTorch with the same architecture illustrated in your paper and code, though without orthogonal regularization and MDC. However, my generated faces at 300k iterations are still very blurry, like the sample below. Do you have any idea why this might happen? Thanks very much!

[image: rec_step_300000]

ajbrock (Owner) commented May 31, 2018

There's plenty that can go wrong. Just based on these images, it looks like even the VAE half of the network isn't working. I'd recommend starting by training DCGAN in PyTorch and tweaking that implementation rather than rolling your own: there are too many details you have to get right, and even with modern updates, getting just one thing wrong can break everything. You might also want to consider employing modern updates (particularly spectral norm), since they do help make things more robust.
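
For example, spectral norm can be added in PyTorch by wrapping each discriminator layer with torch.nn.utils.spectral_norm. A minimal sketch (the layer sizes here are purely illustrative, not the ones from the paper):

import torch.nn as nn
from torch.nn.utils import spectral_norm

# Minimal sketch of a DCGAN-style discriminator with spectral norm on each conv layer.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2, inplace=True),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2, inplace=True),
    spectral_norm(nn.Conv2d(128, 1, 4)),  # collapses to a single real/fake logit for a 16x16 input
)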

Also note that you're training on a close-crop dataset; I recommend using the wider crops for more pleasing images.

ecilay (Author) commented May 31, 2018

Thanks for the prompt reply! I actually started by tweaking a VAE/GAN model, combining the encoder and discriminator into one model as described in your paper, with two loss optimizers as shown in your code. The VAE/GAN trained fine, but my IAN training is problematic as shown above. I will look through the code again. Thanks!

ajbrock (Owner) commented May 31, 2018

Are you making sure to not propagate reconstruction gradients to the discriminator? I've always kept the "encoder" as a small MLP (or even a single dense layer) that operates on one of the last layers of the discriminator, but doesn't propagate gradients back to it.
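
Concretely, the setup looks something like this (a minimal sketch; the helper name and sizes are illustrative, not from the actual code):

import torch.nn as nn

# Minimal sketch: the "encoder" is a small MLP on top of a late discriminator
# feature; the .detach() stops encoder/reconstruction gradients from reaching D.
encoder_head = nn.Sequential(nn.Linear(1000, 1000), nn.ReLU(), nn.Linear(1000, 2000))

h = get_late_discriminator_features(x)   # hypothetical helper: one of D's last layers
stats = encoder_head(h.detach())         # detached, so no gradients flow back into D
mean, logvar = stats.chunk(2, dim=1)     # split into a 1000-d mean and log-variance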

ecilay (Author) commented May 31, 2018

Yes, the loss for discriminator_encoder = bce_real + bce_reconstruction + bce_sampled_noise (bce = binary cross-entropy).
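
In code that is roughly the following (just a sketch; the real/fake label assignment and the detach calls are my assumptions about the standard setup):

import torch
import torch.nn as nn

# Sketch only: x_real, x_reconstructed, z and batch_size are assumed to exist,
# and the scalar output is assumed to be a logit.
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(batch_size, 1), torch.zeros(batch_size, 1)
_, out_real, _, _ = discriminator_encoder(x_real)
_, out_rec,  _, _ = discriminator_encoder(x_reconstructed.detach())
_, out_samp, _, _ = discriminator_encoder(decoder(z).detach())
loss_discriminator_encoder = bce(out_real, ones) + bce(out_rec, zeros) + bce(out_samp, zeros)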

I have one model for 1) discriminator_encoder and one model for 2) decoder, which is a normal DCGAN decoder. The loss above is for 1) discriminator_encoder.
Pseudo code for the discriminator_encoder class is below:

class discriminator_encoder(nn.Module):
  def __init__(self):
    super().__init__()
    self.features = ...      # a vector of size 64*4*8*8
    self.lth_features = ...  # a vector of size 1000
    self.output = ...        # a scalar
    self.mean = ...          # a vector of size 1000
    self.logvar = ...        # a vector of size 1000

  def forward(self, x):
    ...
    return lth_features, output, mean, logvar

Then, with the loss defined above and

opt_discriminator_encoder = optim.Adam(discriminator_encoder.parameters())

Does this look right to you? Thanks!

ecilay (Author) commented Jun 1, 2018

Hi, you mentioned that the encoder

"doesn't propagate gradients back to it."

Then how do you train the encoder? Not together with the discriminator? Thanks!
