[Feature]: DAY 1 SUPPORT - Gemini 1.5 #1982
Comments
Added the new renamed gemini pro models. Looks like gemini-1.5 is in private preview (model names not released). Will update this ticket once they're out.

Model strings added to https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json. You should now be able to call this.

Vertex AI

via SDK:

```python
import litellm

litellm.vertex_project = "hardy-device-38811"  # Your Project ID
litellm.vertex_location = "us-central1"  # proj location

response = litellm.completion(
    model="gemini-1.5-pro",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)
```

via Proxy Server:

```yaml
litellm_settings:
  vertex_project: "hardy-device-38811" # Your Project ID
  vertex_location: "us-central1" # proj location

model_list:
  - model_name: team1-gemini-pro
    litellm_params:
      model: gemini-1.5-pro
```

Google AI Studio

via SDK (uses the `gemini/` prefix and your `GEMINI_API_KEY`; the Vertex project/location settings do not apply here):

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)
```

via Proxy Server:

```yaml
model_list:
  - model_name: team1-gemini-pro
    litellm_params:
      model: gemini/gemini-1.5-pro
```
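Once the proxy is running with a config like the one above, any OpenAI-compatible client can call it with the aliased model name. Below is a minimal stdlib-only sketch of building such a request; it assumes the proxy listens on `localhost:4000` (litellm's default port), and the helper name is illustrative, not part of litellm:

```python
# Sketch: build an OpenAI-style /chat/completions request for the litellm
# proxy using only the standard library. Assumes the proxy is reachable at
# http://localhost:4000 and that "team1-gemini-pro" matches the config's
# model_name alias.
import json
import urllib.request


def build_chat_request(base_url: str, model: str, user_prompt: str) -> urllib.request.Request:
    """Return a POST request carrying an OpenAI-style chat payload."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request(
    "http://localhost:4000",
    "team1-gemini-pro",
    "write code for saying hi from LiteLLM",
)
# urllib.request.urlopen(req) would send it once the proxy is up.
```

The proxy then routes `team1-gemini-pro` to whichever `litellm_params.model` the config maps it to.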
Hi, I tried to run Gemini 1.5 Pro but I am getting this error. Here is the code:

```python
import os

import litellm  # Main library for LLMs
from dotenv import load_dotenv


def init_api_keys():
    load_dotenv()
    gemini_api_key = os.getenv("GEMINI_API_KEY")
    if not gemini_api_key:
        print("GEMINI_API_KEY not found in .env file")
        exit()


def get_model_message(prompt: str):
    system_message = "You are an intelligent coding assistant. You can generate code efficiently. \n"
    messages = [
        {"role": "system", "content": system_message},
        {"role": "assistant", "content": "Please generate code wrapped inside triple backticks known as codeblock."},
        {"role": "user", "content": prompt},
    ]
    return messages


def extract_content(output):
    try:
        return output["choices"][0]["message"]["content"]
    except (KeyError, TypeError) as exception:
        print(f"Error extracting content: {exception}")
        raise


def main():
    try:
        # litellm.set_verbose = True
        # Load the API key from the .env file
        init_api_keys()
        model_name = "gemini/gemini-1.5-pro"
        prompt = input("Enter your query: ")
        messages: list = get_model_message(prompt)
        temperature: float = 0.1
        response = litellm.completion(model_name, messages=messages, temperature=temperature)
        if not response:
            print("Error in generating response. Please try again.")
            return
        content = extract_content(response)
        if not content:
            print("Error in extracting content. Please try again.")
            return
        print(content)
    except Exception as exception:
        print(f"Error in main: {exception}")


if __name__ == "__main__":
    main()
```

And the error:

```
Error generating response: 404 models/gemini-1.5-pro-latest is not found for API version v1beta, or is not supported for GenerateContent. Call ListModels to see the list of available models and their supported methods.
```
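One interim workaround for 404s like the one above is to probe a list of candidate model strings and fall back when one is rejected. This is a hypothetical helper, not litellm API: `complete_fn` stands in for `litellm.completion`, and the stub backend exists only so the demo runs offline.

```python
# Hypothetical fallback sketch: try candidate model strings in order and
# move on when the backend answers with a "not found" style error.
def complete_with_fallback(complete_fn, candidates, messages, **kwargs):
    """Return (model_used, response) from the first candidate that works."""
    last_error = None
    for model in candidates:
        try:
            return model, complete_fn(model=model, messages=messages, **kwargs)
        except Exception as exc:
            if "is not found" in str(exc):  # e.g. the 404 shown above
                last_error = exc
                continue
            raise  # unrelated errors should not be swallowed
    raise last_error


# Stub backend for the demo: pretends only gemini-pro is available.
def fake_completion(model, messages, **kwargs):
    if model != "gemini/gemini-pro":
        raise RuntimeError(f"404 models/{model} is not found for API version v1beta")
    return {"choices": [{"message": {"content": "ok"}}]}


used, resp = complete_with_fallback(
    fake_completion,
    ["gemini/gemini-1.5-pro", "gemini/gemini-pro"],
    [{"role": "user", "content": "hi"}],
)
```

Swapping `fake_completion` for `litellm.completion` would try the real endpoints in the same order.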
I have added a new feature request for this as well (google-gemini/generative-ai-python#227) and am waiting for it to be approved to get access via the API.
Hi @krrishdholakia, Gemini 1.5 Pro via the API is still not working. Request to litellm:

```python
litellm.completion(
    'gemini/gemini-1.5-pro',
    messages=[
        {'role': 'system', 'content': 'You are an intelligent coding assistant. You can generate code effeciently. \n'},
        {'role': 'assistant', 'content': 'Please generate code wrapped inside triple backticks known as codeblock.'},
        {'role': 'user', 'content': ' Write factorial of number in C++ 20'},
    ],
    temperature=0.1,
)
```

Debug log:

```
self.optional_params: {}
kwargs[caching]: False; litellm.cache: None
Final returned optional params: {'temperature': 0.1}
self.optional_params: {'temperature': 0.1}
{'model': 'gemini-1.5-pro', 'messages': [{'role': 'system', 'content': 'You are an intelligent coding assistant. You can generate code effeciently. \n'}, {'role': 'assistant', 'content': 'Please generate code wrapped inside triple backticks known as codeblock.'}, {'role': 'user', 'content': ' Write factorial of number in C++ 20'}], 'optional_params': {'temperature': 0.1}, 'litellm_params': {'acompletion': False, 'api_key': None, 'force_timeout': 600, 'logger_fn': None, 'verbose': False, 'custom_llm_provider': 'gemini', 'api_base': '', 'litellm_call_id': '5edf1598-7f74-45f5-ab8a-a9c7ba572be3', 'model_alias_map': {}, 'completion_call_id': None, 'metadata': None, 'model_info': None, 'proxy_server_request': None, 'preset_cache_key': None, 'no-log': False, 'stream_response': {}}, 'start_time': datetime.datetime(2024, 4, 10, 0, 57, 27, 658729), 'stream': False, 'user': None, 'call_type': 'completion', 'litellm_call_id': '5edf1598-7f74-45f5-ab8a-a9c7ba572be3', 'completion_start_time': None, 'temperature': 0.1, 'input': ['You are an intelligent coding assistant. You can generate code effeciently. \nPlease generate code wrapped inside triple backticks known as codeblock. Write factorial of number in C++ 20'], 'api_key': '', 'additional_args': {'complete_input_dict': {'inference_params': {'temperature': 0.1}}}, 'log_event_type': 'pre_api_call'}
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Logging Details: logger_fn - None | callable(logger_fn) - False
Logging Details LiteLLM-Failure Call
self.failure_callback: []
Error in main: 404 models/gemini-1.5-pro is not found for API version v1beta, or is not supported for GenerateContent. Call ListModels to see the list of available models and their supported methods.
```

LiteLLM: Current Version = 1.34.38
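One detail worth noting in the `'input'` field of the log above: the three chat messages are flattened into a single prompt string before the call goes out. A rough illustration of that flattening follows; it is a simplification for reading the log, not litellm's actual implementation, and the message text is kept verbatim (including the original "effeciently" typo):

```python
# Illustration only: reproduce the flattened 'input' string from the debug
# log by concatenating the content of each chat message in order.
def flatten_messages(messages):
    """Concatenate message contents into one prompt string."""
    return "".join(m["content"] for m in messages)


messages = [
    {"role": "system", "content": "You are an intelligent coding assistant. You can generate code effeciently. \n"},
    {"role": "assistant", "content": "Please generate code wrapped inside triple backticks known as codeblock."},
    {"role": "user", "content": " Write factorial of number in C++ 20"},
]

prompt = flatten_messages(messages)
print(prompt)
```

This matches the single-element `'input'` list in the log, which is why the system and assistant turns appear fused into one string there.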
Gemini 1.5 Pro is now available through the API: https://developers.googleblog.com/2024/04/gemini-15-pro-in-public-preview-with-new-features.html?m=1
Hey @haseeb-heaven, just checked on Google AI Studio - it doesn't look like it's out yet via the API. Let me know if your portal shows you anything different.

^ this error looks like it's being raised by Google
We already have gemini 1.5 support on Vertex AI (`model_prices_and_context_window.json`, line 973 at dbbf605), and people are using it there. Would recommend checking whether you have access there.
I have tried with the Google Generative AI Python SDK, and Gemini 1.5 Pro is working fine there.
The Feature
https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#build-experiment