
about the global and local features in fig 3 #35

Open
sanwei111 opened this issue Jun 22, 2021 · 2 comments

@sanwei111

As we know, a conventional attention module can capture features like those in Fig. 3(b), covering the diagonal as well as other positions; this ability is inherent to attention. But I wonder: once we add a branch that captures local features, can the attention module no longer capture features as before (i.e., the diagonal and other positions), so that it captures only global features?

@wwwadx

wwwadx commented May 24, 2022

Same question: how does the model make sure that the attention layers capture global information and the CNN layers capture local information with only a single NLL loss? Have you figured it out?

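For context, the two-branch design being discussed can be sketched roughly as below. This is a minimal NumPy sketch under my own assumptions, not the authors' code; the names `lsra_block`, `attention_branch`, and `conv_branch` are made up for illustration. Input channels are split in half: one half goes through plain self-attention (every position can attend to every other, so mixing is global), the other through a short depthwise convolution (each position sees only a small neighborhood, so mixing is structurally local), and the two outputs are concatenated.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_branch(x):
    # x: (seq_len, d). Plain self-attention: every position can attend
    # to every other position, so this branch can mix context globally.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def conv_branch(x, kernel_size=3):
    # Depthwise 1-D convolution: each output position only sees a small
    # neighborhood, so this branch is restricted to local context.
    pad = kernel_size // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    w = np.random.randn(kernel_size, x.shape[-1]) * 0.1  # per-channel weights
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        out[t] = (xp[t:t + kernel_size] * w).sum(axis=0)
    return out

def lsra_block(x):
    # Hypothetical two-branch block: split the channels between a
    # global (attention) branch and a local (conv) branch, then concat.
    d = x.shape[-1] // 2
    return np.concatenate([attention_branch(x[:, :d]),
                           conv_branch(x[:, d:])], axis=-1)

x = np.random.randn(10, 8)   # (seq_len=10, d_model=8)
y = lsra_block(x)
print(y.shape)               # (10, 8)
```

Note that in this sketch the locality of the conv branch comes from its receptive field, not from the loss: nothing in a single NLL objective forces the split, but the conv branch simply cannot see beyond its kernel. Whether that structural bias alone explains the attention maps in Fig. 3 is exactly what this thread leaves open.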

@sanwei111
Author

sanwei111 commented May 24, 2022 via email
