Is your feature request related to a problem? Please describe.
Hi there :)
I have not seen any other issue or PR on this.
I prefer not to use the OpenAI API; I don't trust them with my data.
I think if we can get Mixtral or Llama 3 70B (or a future 400B) working with Open Interpreter,
it would be a much-needed speed improvement and cost reduction.
Describe the solution you'd like
I would like to be able to use Groq's API instead of OpenAI's.
They offer industry-leading inference speeds,
they are not yet affiliated with any big corporation (that I am aware of),
and their API is free, at least for now.
Describe alternatives you've considered
Honestly, since it is not mentioned anywhere (not in the README, the docs, issues, or PRs),
I assumed this wasn't supported yet, but reading the LiteLLM docs I can see that they already support Groq.
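For reference, here is a minimal sketch of what calling Groq through LiteLLM looks like, going by their docs (the `groq/` model prefix and the `GROQ_API_KEY` environment variable are what LiteLLM documents; I haven't verified this end-to-end inside OI):

```python
import os
from litellm import completion

# LiteLLM routes "groq/..." model names to Groq's API and reads the
# key from the GROQ_API_KEY environment variable.
os.environ["GROQ_API_KEY"] = "gsk_..."  # your Groq key

response = completion(
    model="groq/mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Write a one-line Python hello world."}],
)
print(response.choices[0].message.content)
```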
Additional context
I have already played with the official Groq Python API quite a bit, using it in other advanced pipelines, and it seems it would fit well with OI,
but when using `--model groq/mixtral-8x7b-32768`
I am getting errors:
I have created a new branch, `groq`, to try to fix this, bypassing LiteLLM and using the official API directly.
I've managed to hook it cleanly into the existing OI flow with no errors :)
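For context, the direct call in my branch is essentially the standard Groq client usage (this sketch follows the official quickstart shape, not the exact code from my branch):

```python
from groq import Groq

# The official client reads GROQ_API_KEY from the environment by default.
client = Groq()

chat_completion = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Write Python code that prints 'hi'."}],
)
print(chat_completion.choices[0].message.content)
```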
But no code is actually executed...
It does everything right, but it seems to hallucinate the result instead of actually running the code it wrote.
I understand that it has to output 'execute' and 'code', but I'm not sure in what format.
Can anyone tell me what format OI is expecting? (I currently don't have OpenAI API access to test against.) An example of a working output JSON would be great.
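My current guess, purely as an assumption from the 'execute'/'code' hints, is that OI expects an OpenAI-style function call on the assistant message, something like the following (the `language` field is also a guess on my part; please correct me):

```json
{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "execute",
    "arguments": "{\"language\": \"python\", \"code\": \"print('hello')\"}"
  }
}
```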
It seems it's just a matter of making Mixtral and other models produce the output OI expects.
(I'll do more research; maybe someone has solved this in another issue.)
Really hoping to get this to work.
Thanks a lot, and all the best!