
Error in BahdanauAttnDecoderRNN #133

Open
hityzy1122 opened this issue Jul 19, 2019 · 1 comment

@hityzy1122

In class BahdanauAttnDecoderRNN(nn.Module), the GRU is created as self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout_p), but the tensor fed into the GRU is rnn_input = torch.cat((word_embedded, context), 2), whose last dimension is 2 * hidden_size, so it does not match the declared input size.
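If the embedding and the context vector both have width hidden_size, one way to make the shapes line up is to declare the GRU with input_size = hidden_size * 2. A minimal sketch (standalone, not the tutorial's full decoder; the variable names are taken from the snippet above):

```python
import torch
import torch.nn as nn

hidden_size, n_layers, dropout_p, batch = 256, 2, 0.1, 4

# GRU declared to expect 2 * hidden_size input features,
# matching the concatenation of word_embedded and context.
gru = nn.GRU(hidden_size * 2, hidden_size, n_layers, dropout=dropout_p)

word_embedded = torch.randn(1, batch, hidden_size)  # (seq_len=1, batch, hidden_size)
context = torch.randn(1, batch, hidden_size)        # (seq_len=1, batch, hidden_size)
rnn_input = torch.cat((word_embedded, context), 2)  # (1, batch, 2 * hidden_size)

last_hidden = torch.zeros(n_layers, batch, hidden_size)
output, hidden = gru(rnn_input, last_hidden)
print(output.shape)  # torch.Size([1, 4, 256])
```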

@Michi-123

nn.GRU has the parameters below, doesn't it?

input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional – If True, becomes a bidirectional GRU. Default: False
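So input_size is the parameter at issue: if it is set to hidden_size but the tensor passed to forward has 2 * hidden_size features, PyTorch raises a size-mismatch error. A quick check (the numbers are made up for illustration):

```python
import torch
import torch.nn as nn

hidden_size = 8
gru = nn.GRU(hidden_size, hidden_size, num_layers=1)  # input_size == hidden_size

rnn_input = torch.randn(1, 2, hidden_size * 2)  # last dim is 2 * hidden_size
try:
    gru(rnn_input)
except RuntimeError as e:
    print(e)  # reports that input.size(-1) must equal input_size
```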
