
Why not use Conv2DTranspose rather than Conv2D in the generator? #24

Open
lestel opened this issue Mar 8, 2018 · 1 comment

@lestel

lestel commented Mar 8, 2018

The related papers all say they use Conv2DTranspose.

@NagabhushanSN95

Yes. That is one of their key contributions.
They say to:

  1. use strided convolutions instead of max-pooling layers
  2. use Conv2DTranspose instead of upsampling
  3. use batch normalization layers
  4. use ReLU activations for intermediate layers

@jacobgil You've implemented none of those. So this is not DCGAN, right?
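For reference, the four guidelines above can be sketched as a small generator in tf.keras. This is only an illustrative sketch: the layer widths, kernel sizes, and 28×28 output are assumptions for demonstration, not the configuration of this repository or the DCGAN paper.

```python
# Sketch of a DCGAN-style generator following the guidelines above.
# Sizes (latent_dim=100, 28x28x1 output) are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(7 * 7 * 128),
        layers.BatchNormalization(),   # guideline 3: batch normalization
        layers.ReLU(),                 # guideline 4: ReLU in intermediate layers
        layers.Reshape((7, 7, 128)),
        # guideline 2: learned upsampling via Conv2DTranspose,
        # instead of UpSampling2D followed by Conv2D
        layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="same",
                               activation="tanh"),
    ])

gen = build_generator()
print(gen(tf.random.normal((1, 100))).shape)  # (1, 28, 28, 1)
```

With `padding="same"` and `strides=2`, each Conv2DTranspose doubles the spatial size (7 → 14 → 28), which is the learned counterpart of the fixed upsampling it replaces; the mirror-image discriminator would use strided Conv2D layers in place of max-pooling (guideline 1).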
