
Can you explain bit about input module of the dynamic memory network(DMN+) mentioned in here. #11

Open
shamanez opened this issue Jun 22, 2017 · 0 comments


shamanez commented Jun 22, 2017

In the input module, how is the information coming from the context embedded? Some papers mention:

They concatenate all the words in the context, add an EOS token at the end of each sentence, and feed the sequence through an RNN with GRU units, then take the hidden states at each time step.
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

In cases where the input sequence is a list of sentences,
we concatenate the sentences into a long list of word
tokens, inserting after each sentence an end-of-sentence token.

I went through the code and it's a bit different here. What is actually happening? Is each sentence fed separately through an RNN?
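For reference, the scheme quoted from the "Ask Me Anything" paper can be sketched as plain preprocessing: flatten the context sentences into one token stream with an end-of-sentence marker after each sentence, and record where those markers land. The GRU would run over the whole stream, and its hidden states at the EOS positions serve as the sentence-level "facts". This is only an illustrative sketch of the paper's description, not the repo's code; the names `EOS` and `concat_with_eos` are made up for the example.

```python
# Sketch of the original DMN input preprocessing (assumption: names are
# illustrative, not taken from this repository's code).
EOS = "<EOS>"

def concat_with_eos(sentences):
    """Flatten sentences into word tokens with <EOS> markers, returning
    the token stream and the indices of the <EOS> positions."""
    tokens, eos_positions = [], []
    for sent in sentences:
        tokens.extend(sent.split())
        tokens.append(EOS)
        eos_positions.append(len(tokens) - 1)  # this sentence's <EOS> index
    return tokens, eos_positions

context = ["John went to the garden", "Mary picked up the milk"]
tokens, eos_positions = concat_with_eos(context)
# tokens -> ['John', 'went', 'to', 'the', 'garden', '<EOS>',
#            'Mary', 'picked', 'up', 'the', 'milk', '<EOS>']
# eos_positions -> [5, 11]
```

Note that the DMN+ paper itself ("Dynamic Memory Networks for Visual and Textual Question Answering") replaces this word-level scheme: each sentence is encoded separately with a positional encoding, and the resulting sentence vectors are then passed through a bidirectional GRU (the "input fusion layer"). That difference may be what you are seeing in the code, though only the maintainers can confirm what this implementation does.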

@shamanez shamanez changed the title Can you explain bit about input module of the dynamic memory network mentioned in here. Can you explain bit about input module of the dynamic memory network(DMN+) mentioned in here. Jun 22, 2017