
BadRequest: 400 POST https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?%24alt=json%3Benum-encoding%3Dint: Only one candidate can be specified #3743

Open
Snigdha8 opened this issue May 3, 2024 · 0 comments
Labels
api: vertex-ai Issues related to the googleapis/python-aiplatform API.

Comments


Snigdha8 commented May 3, 2024

I want to obtain all the possible candidates from the response of Gemini 1.0.
In my code, when candidate_count = 1, the call returns a response with one candidate; but when candidate_count > 1, it raises the error below:

```
WARNING:tornado.access:400 POST /v1beta/models/gemini-pro:generateContent?%24alt=json%3Benum-encoding%3Dint (127.0.0.1) 795.11ms

BadRequest                                Traceback (most recent call last)
in <cell line: 2>()
      1 question_1 = "What is Cucumber?"
----> 2 response = get_response(question_1)
      3 print("\n\nGemini response \n", response)

8 frames
/usr/local/lib/python3.10/dist-packages/google/ai/generativelanguage_v1beta/services/generative_service/transports/rest.py in __call__(self, request, retry, timeout, metadata)
    844                 # subclass.
    845                 if response.status_code >= 400:
--> 846                     raise core_exceptions.from_http_response(response)
    847
    848                 # Return the response

BadRequest: 400 POST https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?%24alt=json%3Benum-encoding%3Dint: Only one candidate can be specified
```

Below is my code:

```python
import os

import google.generativeai as genai


def get_response(question):
    safety_settings = [
        {
            "category": "HARM_CATEGORY_DANGEROUS",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_HARASSMENT",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "threshold": "BLOCK_NONE",
        },
    ]

    config = genai.GenerationConfig(
        candidate_count=1,  # raising this above 1 triggers the 400 error
        temperature=0.1,
    )

    genai.configure(api_key=os.environ["API_KEY"])  # API key read from the environment
    model = genai.GenerativeModel(
        'gemini-pro',
        safety_settings=safety_settings,
        generation_config=config,
    )
    response = model.generate_content(question)

    candidates = response.candidates  # Get all candidates
    print("\n")
    print("Length of candidates -> ", len(candidates))
    print("All candidates -> ", candidates)
    print("\n")
    all_answers = [candidate.content for candidate in candidates]
    print("All answers -> ", all_answers)
    return response.text


question_1 = "What is Cucumber?"
response = get_response(question_1)
print("\n\nGemini response \n", response)
```

In general I have noticed that, by default, Gemini returns only one candidate in every response.

I am following this documentation:
https://ai.google.dev/api/python/google/generativeai/GenerationConfig
https://ai.google.dev/api/rest/v1/GenerateContentResponse

Can someone please explain why candidate_count does not accept values greater than 1, even though the documentation lists it as a configurable parameter?
If there is another way of getting multiple responses from the LLM, please share.
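For reference, the only workaround I have come up with so far is to call generate_content once per desired answer and collect the texts myself. This is my own sketch, not an SDK feature, and `generate_n_candidates` is a helper name I made up; `model` can be any object exposing a `generate_content(prompt)` method whose result has a `.text` attribute (such as `genai.GenerativeModel`):

```python
# Possible workaround (my own sketch, not an official API): since the
# v1beta generateContent endpoint rejects candidate_count > 1 for
# gemini-pro, call the model once per desired answer instead.
def generate_n_candidates(model, prompt, n=3):
    """Request n independent completions and return their texts."""
    return [model.generate_content(prompt).text for _ in range(n)]
```

With temperature > 0 each call can return a different completion, so this approximates candidate_count=n at the cost of n separate requests.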

@product-auto-label product-auto-label bot added the api: vertex-ai Issues related to the googleapis/python-aiplatform API. label May 3, 2024