[Question]: Google Gemini 1.5 flash model support #13573
Comments
To address the error you're seeing with the Gemini 1.5 Flash model, make sure the model is configured for the task type you actually use: since you don't intend to pass images, it must be set up as a text model, not a vision model. Here's what you need to do:
By following these steps, you should be able to use the Gemini 1.5 Flash model for text or multi-modal tasks without hitting this error. Also make sure your environment is configured correctly, especially any credentials for Google's API or other external services you integrate with.
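The core of the advice above is routing: pick a vision-capable model only when the request actually contains images, and a text model otherwise. A minimal sketch of that idea (note: `select_gemini_model` is a hypothetical helper written for illustration, not a llama-index or Google API function):

```python
def select_gemini_model(has_images: bool) -> str:
    """Pick a Gemini model name appropriate for the request type.

    The 400 error "Add an image to use models/gemini-pro-vision, or
    switch your model to a text model" indicates a vision model was
    selected for a request that carries no image parts.
    """
    if has_images:
        # Multimodal requests need a vision-capable model.
        return "models/gemini-pro-vision"
    # Text-only requests should target a text/chat model.
    return "models/gemini-1.5-flash-latest"


# Text-only request: route to the Flash text model.
print(select_gemini_model(has_images=False))
# Request with attached images: route to the vision model.
print(select_gemini_model(has_images=True))
```

This keeps the model choice explicit at the call site, so a text-only prompt can never be sent to a vision-only endpoint by accident.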
I am having the same issue, and have tried with different generative models.
Unfortunately, I am encountering the same problem. I've tried everything and couldn't find a workaround.
Question Validation
Question
Hi,
I am trying to use Gemini Flash with the following model definition:
llm = Gemini(
    model="models/gemini-1.5-flash-latest",
    api_key="SOME-KEY",
    is_chat_model=True,
    is_function_calling_model=False,
    temperature=0.1,
)
but when I call llm.complete() I get:
Error 400 Add an image to use models/gemini-pro-vision, or switch your model to a text model.