
Attention mask (called My in the paper) is not used as input to the discriminator #30

Open
laoliu97 opened this issue Nov 1, 2022 · 1 comment

Comments


laoliu97 commented Nov 1, 2022

Hello, I'm very interested in your work.
After reading the v1 paper and the code, I found that the attention mask (My) and the composite image (called Gy in the paper) are not fed into D together; only the composite image Gy (Gy = Ry * My + x * (1 - My)) is passed to D.
Have you removed this part of the code?
The following figure shows the relevant formula from the paper, but it does not appear in the code.
[Figure: the relevant formula from the paper]
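
For reference, a minimal sketch of the compositing step described above, assuming PyTorch tensors; the function and variable names here are illustrative and not taken from the repository.

```python
import torch

def composite(R_y: torch.Tensor, M_y: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Blend generated content R_y with the input x using the attention mask M_y.

    Gy = Ry * My + x * (1 - My), applied element-wise.
    R_y, x: (N, C, H, W) images; M_y: (N, 1, H, W) mask in [0, 1], broadcast over channels.
    """
    return R_y * M_y + x * (1.0 - M_y)
```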


laoliu97 commented Nov 1, 2022

I found that the code only has D_A and D_B; there is no D_YA or D_YB in it.
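
As a rough illustration of what a mask-conditioned discriminator such as D_YA/D_YB could look like (this is an assumption based on the paper's formula, not code from the repository), the composite image and the mask could be concatenated along the channel dimension before being judged jointly:

```python
import torch
import torch.nn as nn

class MaskConditionedDiscriminator(nn.Module):
    """Hypothetical PatchGAN-style discriminator that sees both Gy and My.

    The composite image (C channels) and the attention mask (1 channel) are
    concatenated along the channel axis, so D evaluates the pair jointly.
    """

    def __init__(self, in_channels: int = 3, base_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels + 1, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels * 2, 1, 4, stride=1, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, G_y: torch.Tensor, M_y: torch.Tensor) -> torch.Tensor:
        # Concatenate composite image and mask along channels, then score patches.
        return self.net(torch.cat([G_y, M_y], dim=1))
```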
