
About the automatic evaluation #15

Open
bingfeiz opened this issue Dec 20, 2021 · 4 comments

Comments

@bingfeiz

First of all, thank you very much for your help. I have run into a problem and hope you can answer it. I implemented the automatic evaluation metrics myself, but my results differ considerably from those reported in the paper. Could you please release the implementation of your evaluation metrics?

@lizekang
Collaborator

Hi, we use the evaluation scripts from https://github.com/microsoft/DialoGPT. What is the problem?

@bingfeiz
Author

First of all, thank you for your answer. The evaluation code is a bit confusing: which folder's automatic evaluation do you use, dstc or pycocoevalcap?

@lizekang
Collaborator

It's in dstc/. You can refer to https://github.com/mgalley/DSTC7-End-to-End-Conversation-Modeling for more details.
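For anyone else landing here, below is a minimal sketch of the kind of corpus-level metrics (BLEU-4, NIST-4) that the DSTC7-style evaluation reports, written with NLTK. This is only an illustration under those assumptions, not the actual script in DialoGPT's dstc/ folder, which additionally handles tokenization, multi-reference files, and further metrics such as METEOR and Entropy.

```python
# Illustrative sketch only (assumption): computes corpus-level BLEU-4 and NIST-4
# with NLTK on whitespace-tokenized text. It is NOT the DialoGPT dstc/ script.
from nltk.translate.bleu_score import corpus_bleu
from nltk.translate.nist_score import corpus_nist


def evaluate(hypotheses, references):
    """hypotheses: list of str; references: list of lists of str (multi-reference)."""
    hyp_tokens = [h.split() for h in hypotheses]
    ref_tokens = [[r.split() for r in refs] for refs in references]
    bleu4 = corpus_bleu(ref_tokens, hyp_tokens, weights=(0.25, 0.25, 0.25, 0.25))
    nist4 = corpus_nist(ref_tokens, hyp_tokens, n=4)
    return {"BLEU-4": bleu4, "NIST-4": nist4}


if __name__ == "__main__":
    hyps = ["the cat sat on the mat"]
    refs = [["the cat is on the mat", "a cat sat on the mat"]]
    print(evaluate(hyps, refs))
```

Differences from the paper's numbers often come down to tokenization and reference handling, so using the repository's own scripts is the safest way to reproduce them.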

@bingfeiz
Author

Thank you very much for your answer.
