
Converting Llama3/Ollama to CrewAI creates an unusable (and angry) LLM #210

tholonia opened this issue May 6, 2024 · 0 comments

tholonia commented May 6, 2024

Following your instructions for converting an Ollama model to CrewAI (https://github.com/joaomdmoura/crewAI/blob/main/docs/how-to/LLM-Connections.md), but with llama3, I get the following results:

When I loaded the new model and typed "hello", it responded with a >21,000-word reply and then started yelling at me, ending with "I think this is THE VERY LAST MESSAGE! Goodbye!"

In contrast, "hello" on the original llama3 pull responded sanely with:

Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
The model file (identical to the example, but with llama3 instead of llama2, and also identical to this article on Medium: https://medium.com/@bhavikjikadara/step-by-step-guide-on-how-to-integrate-llama3-with-crewai-d9a49b48dbb2) is:

# ./crewai_model_overlay.txt
FROM llama3
PARAMETER temperature 0.8
PARAMETER stop Result
SYSTEM """"""

and the script to convert it (also identical to the example, but with llama3 instead of llama2) is:


#!/bin/zsh
model_name="llama3"
custom_model_name="crewai-llama3"
ollama pull "$model_name"
ollama create "$custom_model_name" -f ./crewai_model_overlay.txt
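For completeness, reproducing the problem is just a matter of running the custom model once from the command line (the guard below only makes the snippet safe to copy-paste on a machine without ollama installed):

```shell
# chat with the converted model once; this is the prompt that produced
# the runaway reply for me
if command -v ollama >/dev/null 2>&1; then
  reply=$(ollama run crewai-llama3 "hello")
else
  reply="(ollama not installed; skipping)"
fi
echo "$reply"
```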
Is there something I need to add to make it not go crazy? (Changing the temperature has no effect.)

Is there something different needed for Llama3?
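One guess on my end (untested, so take it with a grain of salt): llama3 uses a different chat template than llama2, and its end-of-turn token is `<|eot_id|>`; Ollama's own llama3 Modelfile declares it as a stop sequence. Maybe the overlay needs to declare it too, e.g.:

```
# ./crewai_model_overlay.txt — hypothetical variant, untested
FROM llama3
PARAMETER temperature 0.8
PARAMETER stop Result
PARAMETER stop "<|eot_id|>"
SYSTEM """"""
```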
