
fix issue 80 #86

Open · wants to merge 2 commits into master
Conversation

fengwang
removing img_A from the combined model to fix issue 80

@fengwang
Author

Hey, thanks very much for your contribution first of all!
I have some questions.
In these lines:
"fake_A = self.generator(img_A)"
"valid = self.discriminator([fake_A, img_A])"
"self.combined = Model(inputs=img_A, outputs=[valid, fake_A])"

based on what I have understood, you let img_A replace img_B. Is this change only meant to let img_A take the Input's place, i.e., to give the model an input shape?
Because in the line:
"g_loss = self.combined.train_on_batch(imgs_B, [valid, imgs_A])"
imgs_B still actually serves as the input data; that is to say, imgs_B is fed into both the generator and the discriminator. Am I right?

Yes. That is what is intended.
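The intended wiring can be sketched without any deep-learning framework. Everything below is a hypothetical stand-in (toy_generator, toy_discriminator, and the doubling rule are invented for illustration); the point is only the data flow: one input image goes in, and two outputs come out, the discriminator's validity score and the generated image, mirroring Model(inputs=img_A, outputs=[valid, fake_A]):

```python
# Toy stand-ins for self.generator / self.discriminator; the doubling
# rule below is invented purely so the example is checkable.
def toy_generator(img):
    # Pretend "translation": double every pixel value.
    return [2 * v for v in img]

def toy_discriminator(fake, cond):
    # Pretend realism score: 1.0 if the fake is the "correct"
    # translation of the conditioning image, else 0.0.
    return 1.0 if fake == [2 * v for v in cond] else 0.0

def combined(img):
    # Single input, two outputs, mirroring
    # Model(inputs=img_A, outputs=[valid, fake_A]).
    fake = toy_generator(img)
    valid = toy_discriminator(fake, img)
    return valid, fake

valid, fake = combined([1, 2, 3])  # valid == 1.0, fake == [2, 4, 6]
```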

The second question is:
"g_loss = self.combined.train_on_batch(imgs_B, [valid, imgs_A])"
In this line, why feed imgs_A as an output?

imgs_A are the ground-truth images. They are the expected outputs of the combined model, i.e., the training targets.

Is "imgs_A" the output of the generator? But the output of the generator has already been fed into the discriminator.

imgs_A are one of the outputs of the combined model, i.e., [generator + discriminator].

Based on what I have understood, the combined network has two inputs: imgs_B for the generator and imgs_B for the discriminator. And there is only one output, which is "valid". Why does the combined network have two outputs?

The combined model, as the name hints, is composed of the generator and the discriminator. The outputs of the generator are expected to match imgs_A, and the outputs of the discriminator are expected to match valid. That is why there are two outputs for the combined model.
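The two outputs translate into two loss terms when the combined model is trained. A minimal sketch, assuming a -log adversarial term and an L1 reconstruction term (the actual loss functions and weighting depend on how the repo compiles the model; all numbers below are hypothetical):

```python
import math

def l1_loss(pred, target):
    # Mean absolute error between generated and ground-truth images.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def adversarial_loss(valid_score):
    # -log(score): small when the discriminator rates the fake as real.
    return -math.log(max(valid_score, 1e-8))

# Hypothetical numbers: a generated image and its ground truth imgs_A.
fake = [0.9, 0.8]
imgs_a = [1.0, 1.0]

# One term per output of the combined model: [valid, fake_A].
g_loss = adversarial_loss(0.9) + l1_loss(fake, imgs_a)
```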

When the generator is being trained, does the loss come from imgs_A, or from the first layer of the discriminator?

The generator is never trained alone; it is trained as part of the combined model.
When the combined model is being trained, only the weights of the generator are updated; the weights of the discriminator remain the same.
When the combined model is being trained, the losses come both from imgs_A and from the errors on valid back-propagated through the discriminator.
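The freezing described above is what keeps the discriminator fixed while g_loss updates the generator; in Keras this is typically achieved by setting self.discriminator.trainable = False before compiling self.combined. As a framework-free sketch of the effect (the scalar "weights", learning rate, and gradient value are made up for illustration):

```python
def train_combined_step(gen_w, disc_w, gen_grad, lr=0.1):
    # Inside the combined model the discriminator is frozen, so its
    # weight is returned unchanged; only the generator takes a step.
    return gen_w - lr * gen_grad, disc_w

# Hypothetical scalar "weights" standing in for whole weight tensors.
new_gen, new_disc = train_combined_step(0.5, 0.3, gen_grad=1.0)
# new_disc is unchanged (0.3); new_gen has stepped to ~0.4
```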

Thanks for your answer in advance!

Successfully merging this pull request may close these issues.

pix2pix: what is img_A for in combined model?