
Restricting attention weights to domain #43

Open
MichaelHopwood opened this issue Apr 30, 2020 · 0 comments
MichaelHopwood commented Apr 30, 2020

In my application, the attention weights center on locations that are indicative of only a subset of the classes. As a result, while the model performs well on that subset, it sometimes misclassifies the other classes, because the attention weights cause the obvious differences to be treated as "residual".

Is there a documented way of restricting the attention weights to a certain value or index domain, to enforce constraints on where the model focuses? This question reminds me of NLP problems where frameworks commonly pair ML methods with a set of predetermined rules (usually defined with spaCy).

Any thoughts? Thanks in advance.
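One common way to enforce an index-domain constraint like this (a general sketch, not something documented in this repository) is an additive attention mask: positions outside the allowed domain are set to negative infinity before the softmax, so they receive exactly zero weight. The function name and shapes below are illustrative assumptions, not part of any specific library API.

```python
import numpy as np

def masked_attention(scores, allowed):
    """Softmax over attention logits, restricted to an allowed index domain.

    scores:  (num_queries, num_keys) array of raw attention logits.
    allowed: (num_keys,) boolean mask; False positions get zero weight.
    """
    # Disallowed positions become -inf, so exp(-inf) = 0 after the softmax.
    masked = np.where(allowed, scores, -np.inf)
    # Numerically stable softmax along the key axis.
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

# Example: force attention onto the first two positions only.
scores = np.array([[2.0, 1.0, 0.5, 3.0]])
allowed = np.array([True, True, False, False])
weights = masked_attention(scores, allowed)
```

Frameworks expose the same idea under names like `attn_mask` (e.g. PyTorch's `torch.nn.MultiheadAttention`); the mask can also be made class- or rule-dependent, which is where a rule system like the spaCy-defined constraints mentioned above could plug in.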

@MichaelHopwood MichaelHopwood changed the title Restricting accuracy weights to domain Restricting attention weights to domain Apr 30, 2020