Attention type #165

Open
ratis86 opened this issue Sep 18, 2018 · 8 comments

Comments


ratis86 commented Sep 18, 2018

Can somebody tell me what type of attention is used in this lib? I checked it against the Bahdanau and Luong attentions and it doesn't look like either, or maybe I'm missing something!


ratis86 commented Sep 18, 2018

Actually, after double-checking it, it looks like it's Luong's dot attention. Is there a reason to use the dot attention and not the general one?
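For reference, a minimal sketch of the difference between Luong's dot and general scoring functions (this is just an illustration, not the library's actual implementation; shapes and names are assumptions):

```python
import torch
import torch.nn as nn

dim = 8
output = torch.randn(2, 5, dim)    # decoder hidden states: (batch, out_len, dim)
context = torch.randn(2, 7, dim)   # encoder outputs: (batch, in_len, dim)

# Luong "dot": score(h_t, h_s) = h_t . h_s
dot_scores = torch.bmm(output, context.transpose(1, 2))

# Luong "general": score(h_t, h_s) = h_t^T W_a h_s, with a learned W_a
linear_in = nn.Linear(dim, dim, bias=False)
general_scores = torch.bmm(linear_in(output), context.transpose(1, 2))
```

The only difference is the extra learned matrix W_a applied to the decoder state before the batched dot product; the rest of the attention computation is the same.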

@pskrunner14

@ratis86 thanks for pointing this out. There's no particular reason that I'm aware of; you can contact the respective contributor for that. However, we're going to be implementing the general as well as the copy attention mechanisms in coming versions.


rrkarim commented Oct 6, 2018

@pskrunner14 Also on this one: whom should I contact?

@pskrunner14

@CoderINusE you're welcome to submit a PR.


rrkarim commented Oct 7, 2018

@pskrunner14 Should I pass an additional argument to the attention.forward method, or would it be clearer to create separate classes for the different attention models and keep a single base class?
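Just to illustrate the second option, here is a rough sketch of what a base class with per-variant subclasses could look like (class and attribute names are hypothetical, not the library's API):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseAttention(nn.Module):
    """Shared mix/concat/projection steps; subclasses only define score()."""
    def __init__(self, dim):
        super().__init__()
        self.linear_out = nn.Linear(dim * 2, dim)

    def score(self, output, context):
        raise NotImplementedError

    def forward(self, output, context):
        attn = F.softmax(self.score(output, context), dim=-1)
        mix = torch.bmm(attn, context)                      # weighted sum of encoder states
        combined = torch.cat((mix, output), dim=2)          # concat context with decoder output
        return torch.tanh(self.linear_out(combined)), attn

class DotAttention(BaseAttention):
    def score(self, output, context):
        return torch.bmm(output, context.transpose(1, 2))

class GeneralAttention(BaseAttention):
    def __init__(self, dim):
        super().__init__(dim)
        self.linear_in = nn.Linear(dim, dim, bias=False)

    def score(self, output, context):
        return torch.bmm(self.linear_in(output), context.transpose(1, 2))
```

This keeps forward's signature unchanged, so the decoder can be handed any attention variant without extra arguments.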

@pskrunner14

@CoderINusE please see the copy branch. This feature is partially implemented; we just need to iron out a few bugs and write tests.


lmatz commented Oct 31, 2018

I am not sure whether the comment in the current Attention module is a bit off. "output = tanh(w ∗ (attn ∗ context) + b ∗ output)" does not match the code or the 5th equation in the paper https://arxiv.org/pdf/1508.04025.pdf, unless b is also interpreted as a matrix? Thanks

@woaksths

I think there is a difference between the math written in the comments and the code.
The main difference is that the math applies a linear layer to (attn ∗ context) and then combines it with output, whereas the code first concatenates (attn ∗ context) with output and after that applies the projection linear layer. I am confused by that order. Please tell me why there is a gap.
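For what it's worth, equation (5) in the Luong paper is h̃_t = tanh(W_c [c_t; h_t]): concatenate the context with the decoder output, then apply one linear layer, which matches what the code does. A minimal sketch of the two readings, with hypothetical variable names:

```python
import torch
import torch.nn as nn

batch, out_len, dim = 2, 3, 4
mix = torch.randn(batch, out_len, dim)      # attn * context (weighted context)
output = torch.randn(batch, out_len, dim)   # decoder hidden states

# What the code does (Luong eq. 5): concat first, then a single projection.
linear_out = nn.Linear(dim * 2, dim)
eq5 = torch.tanh(linear_out(torch.cat((mix, output), dim=2)))

# What the docstring reads like: two separate transforms summed before tanh.
w = nn.Linear(dim, dim, bias=False)
b = nn.Linear(dim, dim, bias=False)
docstring_version = torch.tanh(w(mix) + b(output))
```

Since W_c [c_t; h_t] expands to W_1 c_t + W_2 h_t, the two forms coincide only when b in the comment is read as a matrix rather than a bias vector, which is exactly the point raised above.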
