Better readme for how to give Attention #257

Open
brainmaniac opened this issue Oct 5, 2021 · 0 comments

Firstly, THANK YOU for making this repo! It is fantastic ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐

This is more of a suggestion/wish than an issue.

Would it be possible to extend the section in the readme on how to give the model some (human) attention during training?

Today it looks like this, and it is quite hard to understand what to do:

Attention
This framework also allows for you to add an efficient form of self-attention to the designated layers of the discriminator (and the symmetric layer of the generator), which will greatly improve results. The more attention you can afford, the better!

# add self attention after the output of layer 1
$ stylegan2_pytorch --data ./data --attn-layers 1

# add self attention after the output of layers 1 and 2
# do not put a space after the comma in the list!
$ stylegan2_pytorch --data ./data --attn-layers [1,2]
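
For anyone else landing here, a hedged sketch of what a slightly expanded example in the readme could look like. The only flags confirmed above are `--data` and `--attn-layers`, and the only syntax rule confirmed is that the layer list takes no spaces after the commas; the exact layer indices below are an assumption and have to exist in your discriminator for the run to work.

```bash
# sketch: add self attention after discriminator layers 1, 2 and 3
# (and the symmetric layers of the generator), assuming the network
# at your image size is deep enough to have a layer with index 3
$ stylegan2_pytorch --data ./data --attn-layers [1,2,3]
```

Each attention layer adds memory and compute per training step, so the quoted advice ("the more attention you can afford, the better") amounts to: list as many layer indices as your GPU budget allows.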
