This repository has been archived by the owner on Aug 18, 2021. It is now read-only.

Question about Luong Attention Implementation #130

Open
kyquang97 opened this issue May 12, 2019 · 7 comments

Comments

@kyquang97

Hi @spro, I've read your implementation of Luong attention in the PyTorch seq2seq translation tutorial. In the context-calculation step you use rnn_output as the input when computing attn_weights, but I think we should use hidden at the current decoder timestep instead. Please check it, and could you provide an explanation if I'm wrong?
[screenshot of the decoder code]

@beebrain

@kyquang97 Luong attention takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current time step. That context vector is combined with the current RNN output to produce the output for this time step. Please note that the current context vector is also passed on to the next time step.
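Roughly, a minimal sketch of that flow in PyTorch (assumed names and dot-product scoring, not the tutorial's exact code; gru is an nn.GRU and W_c an nn.Linear(2 * hidden_dim, hidden_dim)):

```python
import torch
import torch.nn.functional as F

def luong_decoder_step(embedded, prev_context, prev_hidden, encoder_outputs, gru, W_c):
    # embedded:        (1, batch, emb_dim)      current target-side input embedding
    # prev_context:    (1, batch, hidden_dim)   context vector from the previous step
    # encoder_outputs: (src_len, batch, hidden_dim)
    rnn_input = torch.cat((embedded, prev_context), dim=2)   # feed the previous context back in
    rnn_output, hidden = gru(rnn_input, prev_hidden)          # one decoder time step

    # dot-product scores between the current decoder state and every encoder state
    scores = torch.bmm(encoder_outputs.transpose(0, 1),       # (batch, src_len, hidden)
                       rnn_output.permute(1, 2, 0))           # (batch, hidden, 1)
    attn_weights = F.softmax(scores, dim=1)                   # (batch, src_len, 1)

    # current context vector = attention-weighted sum of the encoder states
    context = torch.bmm(attn_weights.transpose(1, 2),         # (batch, 1, src_len)
                        encoder_outputs.transpose(0, 1))      # -> (batch, 1, hidden)
    context = context.transpose(0, 1)                         # (1, batch, hidden)

    # combine the context with the current RNN output into the attentional vector
    output = torch.tanh(W_c(torch.cat((context, rnn_output), dim=2)))
    return output, context, hidden                            # context is reused next step
```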

@Coderx7

Coderx7 commented Oct 25, 2019

@beebrain Please correct me if I'm wrong, but you are using the LSTM layer instead of the LSTM cell, so each forward pass processes a different sample, not a different timestep of a single sample. You have no control over the individual timesteps here: what you get out of the RNN in this configuration is a translation/sequence that has already gone through all timesteps!
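To illustrate the distinction (a standalone sketch with made-up shapes, not the tutorial's code): feeding nn.GRU a whole sequence leaves no hook between timesteps, while calling it one timestep at a time in a loop does, and both produce the same hidden states.

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16)
seq = torch.randn(5, 1, 8)               # (seq_len, batch, input_size)

# Whole sequence in one call: all 5 timesteps are processed internally,
# with no chance to intervene between steps.
outputs_all, _ = gru(seq)

# One timestep per call: the loop regains per-step control, e.g. to compute
# attention or feed the previous prediction back in.
h = None
step_outputs = []
for t in range(seq.size(0)):
    out_t, h = gru(seq[t:t + 1], h)      # (1, batch, hidden_size)
    step_outputs.append(out_t)

# Both ways yield the same hidden states for the same inputs.
assert torch.allclose(outputs_all, torch.cat(step_outputs), atol=1e-6)
```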

@syorami

syorami commented Dec 1, 2019

> Luong attention takes the previous context vector and concatenates it with the previous output vector as the input to the RNN. The RNN output is then passed to the attention layer to compute the context vector for the current time step. […]

I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.
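For reference, the relevant equations from Luong et al. (2015), where h_t is the current target (decoder) hidden state and h̄_s are the source (encoder) hidden states:

```latex
a_t(s) = \frac{\exp\bigl(\operatorname{score}(h_t, \bar{h}_s)\bigr)}
              {\sum_{s'} \exp\bigl(\operatorname{score}(h_t, \bar{h}_{s'})\bigr)},
\qquad
c_t = \sum_s a_t(s)\, \bar{h}_s,
\qquad
\tilde{h}_t = \tanh\bigl(W_c\,[c_t;\, h_t]\bigr)
```

So whichever tensor plays the role of h_t in the code is what should enter the score function.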

@beebrain

beebrain commented Dec 1, 2019

> I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.

I think rnn_output and the hidden output of self.gru have the same value here, so you can use either hidden or rnn_output.
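That is easy to check in isolation (a standalone sketch, assuming a single-layer, unidirectional GRU called with one timestep at a time, as the decoder does):

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16)
x = torch.randn(1, 1, 8)        # one timestep: (seq_len=1, batch=1, input_size)
rnn_output, hidden = gru(x)

# rnn_output: (seq_len, batch, hidden_size); hidden: (num_layers, batch, hidden_size).
# With seq_len=1 and a single unidirectional layer they hold the same values,
# so either tensor can be fed to the attention layer.
assert torch.allclose(rnn_output, hidden)
```

With more layers or timesteps they differ: rnn_output keeps only the top layer at every timestep, while hidden keeps every layer at the final timestep only.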

@syorami

syorami commented Dec 4, 2019

> I think rnn_output and the hidden output of self.gru have the same value here, so you can use either hidden or rnn_output.

That reminds me! I was also confused at first by the use of outputs versus hidden states in some attention implementations, and they do in fact share the same values. By the way, what about the LSTM? From the PyTorch docs, an LSTM returns hidden states as well as cell states. Are the cell states used in attention, or can I treat the outputs and the last hidden state as interchangeable?

@beebrain

beebrain commented Dec 4, 2019

> By the way, what about the LSTM? From the PyTorch docs, an LSTM returns hidden states as well as cell states. Are the cell states used in attention, or can I treat the outputs and the last hidden state as interchangeable?

In my opinion, you can use the hidden state output, just as with the GRU.
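The same kind of standalone check works for the LSTM (again assuming a single-layer, unidirectional LSTM stepped one timestep at a time): the output matches h_n, while the cell state c_n is the internal memory and is not normally fed into the attention score, it is only carried to the next step.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16)
x = torch.randn(1, 1, 8)                 # one timestep
output, (h_n, c_n) = lstm(x)

# output: (seq_len, batch, hidden_size); h_n, c_n: (num_layers, batch, hidden_size).
# For one layer and one timestep, output equals h_n, so either can drive attention;
# c_n is only passed along to the next timestep.
assert torch.allclose(output, h_n)
```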

@richardsun-voyager

I am also confused about why we can calculate all the attention scores for the source sentence using the previous hidden state and current input embedding.
