
Questions about interact.py #124

Open
GlitchBox opened this issue Sep 15, 2022 · 0 comments
GlitchBox commented Sep 15, 2022

Hi,
Can anyone explain why the code snippet in interact.py (underlined in red in the screenshot below) was necessary?

(Screenshot from 2022-09-15 11-58-25: the highlighted snippet from interact.py.)

As far as I know, the logits returned by OpenAIGPTLMHeadModel have the shape (batch_size, sequence_length, vocabulary_size).

Why are only the logits at the last position of the output sequence used as the prediction for the next token?
Moreover, why do we have to generate the output text iteratively, one token at a time, when the model itself returns logits for the full sequence rather than just a single token?
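For context, here is a minimal sketch of the autoregressive loop I understand the snippet to be doing. This is not the actual code from interact.py; the model name "openai-gpt", greedy decoding, and the recent transformers API (where the forward call returns an object with a .logits attribute) are assumptions on my part:

```python
import torch
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
model.eval()

# Start from a short prompt; input_ids has shape (1, sequence_length).
input_ids = tokenizer.encode("hello , how are", return_tensors="pt")

# Generate a few tokens one at a time (greedy decoding for simplicity).
for _ in range(5):
    with torch.no_grad():
        outputs = model(input_ids)
    # outputs.logits has shape (batch_size, sequence_length, vocab_size).
    # Only the last position holds the distribution over the *next* token,
    # since position t predicts token t + 1.
    next_token_logits = outputs.logits[0, -1, :]
    next_token = torch.argmax(next_token_logits).reshape(1, 1)
    # Append the chosen token and feed the extended sequence back in.
    input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

In this sketch, the only way to obtain token t + 1 is to take the last-position logits and re-run the model on the extended input, which is exactly the pattern I am asking about.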
