Clarification on post-processing generated result #16
Comments
Hi, sorry for the late response. We use the multi-reference DailyDialog dataset.
If you have any questions, please feel free to ask.
I got the following results on the DailyDialog dataset with the default settings. I fine-tuned the pre-trained DialoFlow models and used a beam size of 5 to generate the output, followed by the NLTK tokenization step.
There is a small gap from the reported results, especially for BLEU, NIST, and METEOR. Could you please help me figure out the source of this discrepancy?
Hi. Kudos for this nice work. I am trying to reproduce the results on the DailyDialog dataset. It would be very helpful if you could clarify the following details.
In Issue #13, you mentioned using "nltk.word_tokenize() to tokenize the sentence and then concatenate the tokens" to make the format of the generated dialogue the same as the reference response. I have two questions here.
It would be very useful if you could briefly describe your post-processing steps.