
Eval without Tuning/Using OPT-1.3B #34

Open
ChaoGaoUCR opened this issue Aug 1, 2023 · 2 comments

Comments

@ChaoGaoUCR

Dear Author,

Thanks for your great project.
I was trying to evaluate the model both with and without tuning. I wondered if we can evaluate using the original (untuned) model.
Also, if I want to use models other than LLaMA, BLOOM, and GPT-J, do I have to write my own code?

Thanks

@HZQ950419
Collaborator

Hi,

Yes, you can evaluate the original models by commenting out Lines 222-227 in evaluate.py. If the model you want to use is already supported, you can simply set the --base_model argument. If not, you need to either set the --target_modules argument or add the model's mapping to target modules in LLM-Adapters/peft/src/peft/mapping.py.
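The mapping edit described above could look roughly like the sketch below. This is a hedged illustration, not the repo's exact source: the dictionary name follows the convention used in PEFT-style code (TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING), and the module names for OPT are an assumption based on the names Hugging Face uses for OPT's attention projections.

```python
# Illustrative sketch of the model-type -> LoRA target-modules mapping
# that LLM-Adapters/peft/src/peft/mapping.py maintains. The dict name
# and entries here are assumptions for illustration, not verbatim code.
TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING = {
    "llama": ["q_proj", "v_proj"],
    "bloom": ["query_key_value"],
    "gptj": ["q_proj", "v_proj"],
}

# Adding support for a new model type (e.g. OPT-1.3B): list the names of
# the linear modules that LoRA should wrap. In Hugging Face's OPT
# implementation the attention projections are called q_proj / v_proj.
TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING["opt"] = ["q_proj", "v_proj"]
```

Alternatively, the same module names can be passed at the command line via --target_modules without touching mapping.py.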

If you have any questions about adding unsupported models to the code base, please let us know and we will help with it!

Thanks!

@ChaoGaoUCR
Author

Thank you so much!
I will try this out😃
