Firstly, THANK YOU for making this repo! It is fantastic ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐ ⭐
This is more of a suggestion/wish than an issue.

Would it be possible to extend the section in the README that explains how to add attention layers when training the model? Right now it looks like this, and it is quite hard to understand what to do:
> **Attention**
>
> This framework also allows you to add an efficient form of self-attention to the designated layers of the discriminator (and the symmetric layer of the generator), which will greatly improve results. The more attention you can afford, the better!
>
> Add self-attention after the output of layer 1:
>
> ```bash
> $ stylegan2_pytorch --data ./data --attn-layers 1
> ```
>
> Add self-attention after the outputs of layers 1 and 2 (do not put a space after the comma in the list!):
>
> ```bash
> $ stylegan2_pytorch --data ./data --attn-layers [1,2]
> ```
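As an aside for whoever updates the README: the warning about not putting a space after the comma comes down to shell word-splitting, and spelling that out might help other readers. A minimal Python sketch (purely illustrative, not part of this repo) showing what the CLI actually receives in each case:

```python
import shlex

# The shell splits arguments on whitespace, so with a space after the
# comma, "[1, 2]" becomes two separate tokens and the --attn-layers
# option only receives the first fragment, "[1,".
with_space = shlex.split("stylegan2_pytorch --data ./data --attn-layers [1, 2]")
no_space = shlex.split("stylegan2_pytorch --data ./data --attn-layers [1,2]")

print(with_space[-2:])  # ['[1,', '2]'] -- the list is broken in two
print(no_space[-1])     # '[1,2]' -- arrives as a single argument
```

So `[1,2]` reaches the program as one argument, while `[1, 2]` does not, which is why the space breaks the command.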