Problems I came across when I try to reproduce the results #37
Hi, if you want to reproduce all the results in the table, you can just train and evaluate with the given commands. For example, to train LLaMA-7B with LoRA, you can use the training command; for evaluation, take SVAMP as an example. If you have any questions, please let us know!
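The actual commands were not captured in this thread snippet. As a rough sketch only, a LoRA fine-tuning run followed by a SVAMP evaluation in this style of repo might look like the following; the flag names, model identifier, and paths here are assumptions, not the repo's confirmed interface, so check the repository README for the authoritative commands:

```shell
# Hypothetical invocation -- flag names and paths are assumptions,
# not the repo's confirmed interface; consult the README.
CUDA_VISIBLE_DEVICES=0 python finetune.py \
    --base_model 'yahma/llama-7b-hf' \
    --data_path 'math_10k.json' \
    --output_dir './trained_models/llama-7b-lora' \
    --adapter_name lora

CUDA_VISIBLE_DEVICES=0 python evaluate.py \
    --dataset SVAMP \
    --base_model 'yahma/llama-7b-hf' \
    --lora_weights './trained_models/llama-7b-lora'
```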
Hi @HZQ950419, thanks for your great work. I also have a problem when I try to evaluate the fine-tuned model with LoRA. I found that the main reason is that the extracted response is empty, for example:
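An empty response like this can happen when the generation stops right after the prompt template's response marker. As a minimal sketch, assuming the evaluation script extracts the answer by splitting on an Alpaca-style "### Response:" marker (the helper name here is hypothetical, not the repo's actual function):

```python
def extract_response(generated_text: str) -> str:
    """Extract the model's output after the Alpaca-style response marker.

    If the model emits nothing after the marker (e.g. it only echoes
    the prompt before hitting EOS), this returns an empty string --
    which surfaces downstream as a "none" response.
    """
    marker = "### Response:"
    if marker not in generated_text:
        return ""
    return generated_text.split(marker, 1)[1].strip()

# A completion that stops right at the marker yields an empty response:
print(repr(extract_response("### Instruction:\nSolve 2+3\n\n### Response:")))    # ''
print(repr(extract_response("### Instruction:\nSolve 2+3\n\n### Response: 5")))  # '5'
```

If the extracted string is empty, it is worth checking the generation length limit and that the prompt template at evaluation time matches the one used during fine-tuning.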
Same problem.
Dear Authors,
Thanks for these great projects and your kind help.
I tried to reproduce all the results in the table, but I came across several issues. Could you please explain some possible problems?
1. When I tried to tune the model, I found that the function "generate_prompt" in both finetune.py and evaluation.py can't extract data from JSON files whose fields are not titled "input", "instruction", "output", "answer", so I renamed all the fields in the JSON file. I was wondering whether I am doing the right thing. Here are the two examples I used.
Original one in the GitHub repo:
The one I modified:
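Instead of renaming the fields in the JSON file itself, one alternative is to normalize the keys at load time. This is a hedged sketch, not the repo's code: the helper name and the alias table are hypothetical and would need to match the actual field names in your dataset:

```python
def normalize_example(raw: dict) -> dict:
    """Map dataset-specific field names onto the keys generate_prompt
    expects ("instruction", "input", "output", "answer").

    The alias table is hypothetical -- adjust the candidate names to
    whatever your JSON file actually uses. Missing fields default to "".
    """
    aliases = {
        "instruction": ["instruction", "question", "query"],
        "input": ["input", "context", "body"],
        "output": ["output", "response", "rationale"],
        "answer": ["answer", "label", "target"],
    }
    example = {}
    for key, candidates in aliases.items():
        example[key] = next((raw[c] for c in candidates if c in raw), "")
    return example

print(normalize_example({"question": "What is 2+3?", "label": "5"}))
# {'instruction': 'What is 2+3?', 'input': '', 'output': '', 'answer': '5'}
```

This keeps the original dataset file untouched, so the same file can still be used with other tooling.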
2. I can't get an answer that is even close to the right label. Since I wasn't working in the ML area before, all the metrics are new to me, but the tuned model gives some results that sound ridiculous to me. I wonder if I did something wrong, or whether there are other metrics I should use to reproduce the tuned-model scores in the GitHub repo.
I attached some results I got from my LoRA-tuned model:
BTW, when I switch the test dataset to the training dataset, the accuracy gets higher, but is still not the same as the table lists.
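For what it's worth, math benchmarks such as SVAMP are usually scored by exact match on the final numeric value in the generation. A minimal sketch of that metric (the helper names are hypothetical, not the repo's actual scoring code) — note that empty responses automatically count as wrong, which would depress accuracy:

```python
import re

def last_number(text: str):
    """Pull the last number out of a model generation; math benchmarks
    are typically scored on this final numeric value."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text)
    return float(matches[-1]) if matches else None

def accuracy(predictions, answers):
    """Exact-match accuracy of predicted numbers against gold answers."""
    correct = 0
    for pred, gold in zip(predictions, answers):
        value = last_number(pred)
        if value is not None and abs(value - gold) < 1e-4:
            correct += 1
    return correct / len(answers)

preds = ["The answer is 5.", "So she has 12 apples.", ""]  # "" = empty response
print(accuracy(preds, [5.0, 10.0, 3.0]))  # 1 of 3 correct
```

If many generations are empty (as in the issue above), accuracy will be far below the reported numbers even when the model itself is fine.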
I wondered if you could share your tuning settings if possible.