
Missing the positional encodings in the encoder #10

Open

dailenson opened this issue Mar 18, 2022 · 0 comments

dailenson commented Mar 18, 2022

Hi, thanks for the impressive work. The paper states: "To retain information regarding the order of input sequences being supplied, we add the positional encodings [23] to the input of each attention layer." However, the released code only adds positional encodings to the Multi-Head Attention of the decoder, not to the Multi-Head Attention of the encoder. Is it better not to apply positional encodings in the encoder?
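
For reference, a minimal sketch (not the repository's actual code) of what the paper's description would look like: sinusoidal positional encodings added to the encoder input before its Multi-Head Attention. The module and variable names below are hypothetical.

```python
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding, added to the input embeddings."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # shape (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); add the encoding for each position
        return x + self.pe[:, : x.size(1)]

# Hypothetical usage: inject positions into the source sequence before the
# encoder's self-attention, mirroring what the paper describes.
pos_enc = SinusoidalPositionalEncoding(d_model=512)
src = torch.randn(8, 100, 512)  # (batch, seq_len, d_model)
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
out = encoder_layer(pos_enc(src))
```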

dailenson changed the title from "Missing the positional encodings" to "Missing the positional encodings in the encoder" on Mar 18, 2022