
[Feature]: DAY 1 SUPPORT - Gemini 1.5 #1982

Closed
krrishdholakia opened this issue Feb 15, 2024 · 9 comments
Labels
enhancement New feature or request

Comments

@krrishdholakia
Contributor

The Feature

https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#build-experiment

Motivation, pitch

.

Twitter / LinkedIn details

No response

@krrishdholakia
Contributor Author

Added the new renamed gemini pro models.

Looks like gemini-1.5 is in private preview (model names not released). Will update this ticket once they're out.

@krrishdholakia
Contributor Author

Model strings added to https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json

You should now be able to call this.

Vertex AI

via SDK

import litellm
litellm.vertex_project = "hardy-device-38811" # Your Project ID
litellm.vertex_location = "us-central1"  # proj location

response = litellm.completion(model="gemini-1.5-pro", messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}])
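litellm returns an OpenAI-format response, so the generated text sits under `choices[0].message.content` (litellm's `ModelResponse` also supports dict-style indexing). A small helper sketch for pulling it out, the `response_text` name is mine:

```python
def response_text(response) -> str:
    """Pull the assistant text out of an OpenAI-format chat response.

    Handles both plain dicts and attribute-style response objects,
    since litellm's ModelResponse supports dict-style indexing too.
    """
    try:
        # Dict-style access (works for plain dicts and litellm responses)
        return response["choices"][0]["message"]["content"]
    except TypeError:
        # Fall back to attribute access for objects without __getitem__
        return response.choices[0].message.content
```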

via Proxy Server

litellm_settings:
  vertex_project: "hardy-device-38811" # Your Project ID
  vertex_location: "us-central1" # proj location

model_list:
  - model_name: team1-gemini-pro
    litellm_params:
      model: gemini-1.5-pro

Google AI Studio

via SDK

import litellm
import os

os.environ["GEMINI_API_KEY"] = "your-google-ai-studio-key"  # from Google AI Studio

response = litellm.completion(model="gemini/gemini-1.5-pro", messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}])

via Proxy Server

environment_variables:
  GEMINI_API_KEY: "your-google-ai-studio-key"

model_list:
  - model_name: team1-gemini-pro
    litellm_params:
      model: gemini/gemini-1.5-pro
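Note that the provider is picked by the prefix on the model string: `gemini/...` goes to the Google AI Studio API, while the bare `gemini-1.5-pro` (with `vertex_project`/`vertex_location` set) goes through Vertex AI. A toy sketch of that split, purely illustrative rather than litellm's real dispatch logic:

```python
def split_provider(model: str, default: str = "vertex_ai"):
    """Split a 'provider/model' string into (provider, model_name).

    Bare model names fall back to the default provider, mirroring how
    a bare 'gemini-1.5-pro' is routed to Vertex AI in this thread.
    (Illustrative only; litellm's actual routing is more involved.)
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return default, model
```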

@haseeb-heaven
Contributor

Hi, I tried to run Gemini 1.5 Pro but I am getting this error.

Here is the code

import os
import litellm  # main library for LLM calls
from dotenv import load_dotenv

def init_api_keys():
    load_dotenv()
    gemini_api_key = os.getenv("GEMINI_API_KEY")
    if not gemini_api_key:
        print("GEMINI_API_KEY not found in .env file")
        exit()

def get_model_message(prompt: str):
    system_message = "You are an intelligent coding assistant. You can generate code efficiently. \n"
    messages = [
        {"role": "system", "content": system_message},
        {"role": "assistant", "content": "Please generate code wrapped inside triple backticks known as a codeblock."},
        {"role": "user", "content": prompt}
    ]
    return messages

def extract_content(output):
    try:
        return output['choices'][0]['message']['content']
    except (KeyError, TypeError) as exception:
        print(f"Error extracting content: {str(exception)}")
        raise

def main():
    try:
        # litellm.set_verbose = True
        # Load the API key from the .env file
        init_api_keys()

        # Set the model (Google AI Studio provider prefix)
        model_name = "gemini/gemini-1.5-pro"
        prompt = input("Enter your query: ")

        messages: list = get_model_message(prompt)
        temperature: float = 0.1

        response = litellm.completion(model=model_name, messages=messages, temperature=temperature)
        if not response:
            print("Error in generating response. Please try again.")
            return

        content = extract_content(response)
        if not content:
            print("Error in extracting content. Please try again.")
            return

        print(content)
    except Exception as exception:
        print(f"Error in main: {str(exception)}")

if __name__ == "__main__":
    main()
Error generating response: 404 models/gemini-1.5-pro-latest is not found for API version v1beta, or is not supported for GenerateContent. Call ListModels to see the list of available models and their supported methods.
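Until the model is actually served by the API, one workaround is to fall back to a model that is available when the 404 comes back. A minimal sketch; the function name and the injectable `_complete` parameter are mine (in real use `_complete` defaults to `litellm.completion`):

```python
def completion_with_fallback(messages,
                             models=("gemini/gemini-1.5-pro", "gemini/gemini-pro"),
                             _complete=None, **kwargs):
    """Try each model in order, falling back when one raises (e.g. Google's 404).

    _complete is injectable for testing; by default it is litellm.completion.
    """
    if _complete is None:
        import litellm  # imported lazily so the helper is testable without litellm
        _complete = litellm.completion
    last_error = None
    for model in models:
        try:
            return _complete(model=model, messages=messages, **kwargs)
        except Exception as exc:  # litellm surfaces the provider's 404 as an exception
            last_error = exc
    raise last_error
```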

@haseeb-heaven
Contributor

I have added a new feature request for this as well (google-gemini/generative-ai-python#227) and am waiting for it to be approved to get access via the API.

@haseeb-heaven
Contributor

Hi @krrishdholakia, Gemini 1.5 Pro via the API is still not working.

Request to litellm:
litellm.completion('gemini/gemini-1.5-pro', messages=[{'role': 'system', 'content': 'You are an intelligent coding assistant. You can generate code effeciently. \n'}, {'role': 'assistant', 'content': 'Please generate code wrapped inside triple backticks known as codeblock.'}, {'role': 'user', 'content': ' Write factorial of number in C++ 20'}], temperature=0.1)


self.optional_params: {}
kwargs[caching]: False; litellm.cache: None
Final returned optional params: {'temperature': 0.1}
self.optional_params: {'temperature': 0.1}
{'model': 'gemini-1.5-pro', 'messages': [{'role': 'system', 'content': 'You are an intelligent coding assistant. You can generate code effeciently. \n'}, {'role': 'assistant', 'content': 'Please generate code wrapped inside triple backticks known as codeblock.'}, {'role': 'user', 'content': ' Write factorial of number in C++ 20'}], 'optional_params': {'temperature': 0.1}, 'litellm_params': {'acompletion': False, 'api_key': None, 'force_timeout': 600, 'logger_fn': None, 'verbose': False, 'custom_llm_provider': 'gemini', 'api_base': '', 'litellm_call_id': '5edf1598-7f74-45f5-ab8a-a9c7ba572be3', 'model_alias_map': {}, 'completion_call_id': None, 'metadata': None, 'model_info': None, 'proxy_server_request': None, 'preset_cache_key': None, 'no-log': False, 'stream_response': {}}, 'start_time': datetime.datetime(2024, 4, 10, 0, 57, 27, 658729), 'stream': False, 'user': None, 'call_type': 'completion', 'litellm_call_id': '5edf1598-7f74-45f5-ab8a-a9c7ba572be3', 'completion_start_time': None, 'temperature': 0.1, 'input': ['You are an intelligent coding assistant. You can generate code effeciently. \nPlease generate code wrapped inside triple backticks known as codeblock. Write factorial of number in C++ 20'], 'api_key': '', 'additional_args': {'complete_input_dict': {'inference_params': {'temperature': 0.1}}}, 'log_event_type': 'pre_api_call'}


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Logging Details: logger_fn - None | callable(logger_fn) - False
Logging Details LiteLLM-Failure Call
self.failure_callback: []
Error in main: 404 models/gemini-1.5-pro is not found for API version v1beta, or is not supported for GenerateContent. Call ListModels to see the list of available models and their supported methods.
LiteLLM: Current Version = 1.34.38


@krrishdholakia
Contributor Author

Hey @haseeb-heaven just checked on Google AI Studio - it doesn't look like it's out yet via the API. Let me know if your portal shows you anything different.

[Screenshot: Google AI Studio model picker, 2024-04-09]

Error in main: 404 models/gemini-1.5-pro is not found for API version v1beta, or is not supported for GenerateContent. Call ListModels to see the list of available models and their supported methods.

^ this error looks like it's being raised by Google

@krrishdholakia
Contributor Author

We already have Gemini 1.5 support on Vertex AI:

"gemini-1.5-pro": {

and people are using it there. Would recommend checking whether you have access there.

@haseeb-heaven
Contributor

> Hey @haseeb-heaven just checked on Google AI Studio - it doesn't look like it's out yet via the API. Let me know if your portal shows you anything different.
>
> Error in main: 404 models/gemini-1.5-pro is not found for API version v1beta, or is not supported for GenerateContent. Call ListModels to see the list of available models and their supported methods.
>
> ^ this error looks like it's being raised by Google

I have tried with the Google Generative AI Python SDK and Gemini 1.5 Pro is working fine there.
