
Replace the locally fine-tuned model #261

Open
chen-li1314 opened this issue May 16, 2024 · 4 comments

@chen-li1314
When I tried RAG-related experiments, I ran into a problem after replacing llama3 with my locally fine-tuned model: `ollama._types.ResponseError: model 'llama3' not found, try pulling it first`, even though I had already changed the name:

```python
def get_rag_assistant(
    llm_model: str = "llama3:instruct",
    embeddings_model: str = "nomic-embed-text",
    user_id: Optional[str] = None,
    run_id: Optional[str] = None,
    debug_mode: bool = True,
) -> Assistant:
```

I don't understand why this is happening.
When I changed it back to the default llama3, it still reported an error. I want to know how Ollama is called in the source code and whether it is possible to change its default location; I suspect the problem lies there.
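For context, a minimal sketch (paraphrased, not the exact phidata source) of how the `llm_model` string presumably flows into the Ollama wrapper inside `get_rag_assistant`:

```python
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

def get_rag_assistant(llm_model: str = "llama3") -> Assistant:
    # The string is passed straight through to Ollama; it must match a
    # model name that `ollama list` reports locally, otherwise the server
    # answers with "model '<name>' not found, try pulling it first".
    return Assistant(llm=Ollama(model=llm_model))
```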

@jacobweiss2305
Contributor

@chen-li1314 here is how to set an Ollama model on an assistant, assuming you have pulled and run the model using the Ollama CLI:
https://docs.phidata.com/llms/ollama
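
For reference, the pattern on that docs page looks roughly like this (it assumes `ollama pull llama3` has already been run):

```python
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

# Point the assistant at a model served by the local Ollama instance.
assistant = Assistant(
    llm=Ollama(model="llama3"),
    description="You help people with their health and fitness goals.",
)
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)
```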

Let me know if you tried this.

Also, we are on Discord, where there are Ollama experts in the channel who can help:
https://discord.com/invite/4MtYHHrgA8

@jacobweiss2305 jacobweiss2305 self-assigned this May 16, 2024
@jacobweiss2305
Copy link
Contributor

Try this: `ollama pull llama3`
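
As a sanity check, you can also list what Ollama has installed from Python, using the official `ollama` client (the same library that raised the `ResponseError`; field names may differ between client versions):

```python
import ollama

# Print every model name/tag the local Ollama server has installed;
# the value passed as llm_model must match one of these exactly.
for model in ollama.list()["models"]:
    print(model["name"])
```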

@chen-li1314
Author

> Try this: `ollama pull llama3`

Thank you for your reply. I found that `ollama pull llama3` solves the problem, but strangely, I didn't even need to change the model name "llama3:instruct" (the name I had previously set for my local model). So how should I use the locally fine-tuned model, since setting the name doesn't seem to take effect?
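
One likely explanation (a sketch, under the assumption that the fine-tuned weights have not yet been imported into Ollama): Ollama only serves models registered in its own library, so a local fine-tune typically has to be imported with a Modelfile first, e.g. `ollama create my-finetuned-llama -f Modelfile`, and only then can that name be passed to the assistant. `my-finetuned-llama` is a hypothetical name for illustration:

```python
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

# "my-finetuned-llama" is hypothetical and must match the name given to
# `ollama create`; a name Ollama has never seen reproduces the
# "model not found, try pulling it first" error above.
assistant = Assistant(llm=Ollama(model="my-finetuned-llama"))
assistant.print_response("Hello from the fine-tuned model.")
```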

@chen-li1314
Author

> @chen-li1314 here is how to set an Ollama model on an assistant, assuming you have pulled and run the model using the Ollama CLI: https://docs.phidata.com/llms/ollama
>
> Let me know if you tried this.
>
> Also, we are on Discord, where there are Ollama experts in the channel who can help: https://discord.com/invite/4MtYHHrgA8

I will give it a try.
