
When evaluating the model on the Amazon dataset, the results fluctuate. #3

Open

AegeanYan opened this issue Oct 17, 2022 · 1 comment

Comments

@AegeanYan

Hi @rktamplayo, thank you for your great work!
I followed the README.md to reproduce the experimental results on the Amazon dataset and used the checkpoint you released to evaluate test.json. However, I didn't get the same result as reported in the paper, and the result fluctuates: I evaluated 5 or more times and got a different result each time. I wonder whether there are other settings I missed.

@AegeanYan
Author

I've solved this issue: line 234, sum_tokens[token_ids] += tokens[tindex], has potential nondeterminism. You should either rewrite that line or set torch.use_deterministic_algorithms(True) to avoid the evaluation fluctuation.
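For reference, here is a minimal sketch of that workaround. The variable names (sum_tokens, token_ids, tokens, tindex) follow the quoted line, but the shapes and the helper function accumulate_token_scores are assumptions for illustration, not code from the repository. The underlying cause is that an in-place indexed add with duplicate indices has an unspecified accumulation order, so floating-point results can differ between runs.

```python
# Sketch of a deterministic version of the quoted accumulation.
# Names follow the issue comment; shapes/helper are hypothetical.
import torch

# Workaround suggested in the comment above: force deterministic kernels.
# Ops that lack a deterministic implementation will raise an error instead
# of silently producing run-to-run differences.
torch.use_deterministic_algorithms(True)

def accumulate_token_scores(sum_tokens, token_ids, tokens, tindex):
    # Plain `sum_tokens[token_ids] += tokens[tindex]` has an unspecified
    # write/accumulation order when token_ids contains duplicate indices,
    # which is the "potential randomness" mentioned above.
    # index_put_ with accumulate=True sums duplicate indices explicitly and,
    # with deterministic algorithms enabled, should give reproducible results.
    sum_tokens.index_put_((token_ids,), tokens[tindex], accumulate=True)
    return sum_tokens
```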
