Upload evaluation outputs and adapters #46

Open
mkeoliya opened this issue Nov 14, 2023 · 3 comments

Comments

@mkeoliya

It would be great if you could provide the individual outputs from running the models on the test sets. Additionally, would it be possible to provide links to all the model adapters used? (Currently the README only includes llama-13b.)

Perhaps a GDrive or Zenodo link would work well.

This would enable quicker turn-around times when comparing different adapters. Thanks a lot for the work so far!

@HZQ950419
Collaborator

Hi,

Thanks for your interest in our project! We have uploaded the outputs of LLaMA-7B and LLaMA-13B with different adapters on both the math reasoning and commonsense reasoning tasks. You can find the outputs here: https://drive.google.com/drive/folders/1weL4Cq1h6M5lOhNL9Hran167D1dqtOZk?usp=sharing. The results are consistent with the reported ones. However, we still need some time to collate the adapter weights.

Please let us know if you have further questions.
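
For reference, a minimal sketch of fetching the shared folder programmatically, assuming the third-party gdown package is installed and the folder remains publicly shared (the local output directory name is an arbitrary choice):

```python
# Download the shared Google Drive folder of evaluation outputs.
# Assumes `gdown` is installed (pip install gdown) and the folder is publicly shared.
import gdown

folder_url = "https://drive.google.com/drive/folders/1weL4Cq1h6M5lOhNL9Hran167D1dqtOZk?usp=sharing"

# Saves the folder contents into ./eval_outputs (hypothetical local path).
gdown.download_folder(url=folder_url, output="eval_outputs", quiet=False)
```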

@mkeoliya
Author

Thanks a lot!

mkeoliya reopened this Nov 19, 2023
@mkeoliya
Author

@HZQ950419 can you upload the adapter weights too?
