Empty response when the finish reason is MAX_TOKENS #280
Comments
Thank you for reporting this issue. I am able to replicate this error. While setting …
I'm facing a similar issue and it's unclear whether this is by design, but I'm getting an empty 'text' response when finishReason is MAX_TOKENS. I'd expect at least a partial response, if not (ideally) a response that actually fits within the assigned token limit. Getting nothing at all is confusing. Can somebody clarify whether this is intended behavior?
Same as above.
I ran into this too and it was very hard to track down. The exception was very misleading:
"The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked."
This is fixed.
Description of the bug:
When using max_output_tokens in generate_content to limit the model's output, the following error is thrown when trying to access response.text, whereas the expected behavior is to get the output generated up to MAX_TOKENS.
ValueError: The `response.text` quick accessor only works when the response contains a valid `Part`, but none was returned. Check the `candidate.safety_ratings` to see if the response was blocked.

The actual response:
This is the code snippet, taken from https://ai.google.dev/tutorials/python_quickstart#generation_configuration, where the expected behaviour is to get up to 20 tokens and then be cut off due to MAX_TOKENS.
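For reference, a minimal repro sketch based on that quickstart section (the prompt and model name here are illustrative placeholders; the linked tutorial uses a similar configuration with max_output_tokens=20):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-pro")

# Limit the output so the candidate finishes with MAX_TOKENS.
response = model.generate_content(
    "Tell me a story about a magic backpack.",
    generation_config=genai.types.GenerationConfig(
        candidate_count=1,
        max_output_tokens=20,
        temperature=1.0,
    ),
)

# On 0.5.0 this raises the ValueError above instead of returning
# the ~20 tokens that were generated before the cutoff.
print(response.text)
```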
Actual vs expected behavior:
https://ai.google.dev/tutorials/python_quickstart#generation_configuration
The expected result is output similar to the example at the link above: no error when accessing response.text, just a truncated output of roughly 20 tokens.
Works in AI Studio
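As a workaround sketch (not an official recommendation), the candidate's finish reason and parts can be inspected before touching response.text, assuming the response object exposes candidates as it does in 0.5.0:

```python
candidate = response.candidates[0]

# finish_reason is an enum; MAX_TOKENS means the model hit the output limit.
print(candidate.finish_reason)

if candidate.content.parts:
    # Join whatever partial text was returned, if any.
    print("".join(part.text for part in candidate.content.parts))
else:
    # No parts at all: this is the empty-response case reported here.
    print("Empty candidate; prompt_feedback:", response.prompt_feedback)
```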
Any other information you'd like to share?
Package information
Name: google-generativeai
Version: 0.5.0
Streaming partially works, but the last chunk with MAX_TOKENS is still empty (this could be intended behaviour). See the sketch below.
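For completeness, a streaming sketch along those lines; chunk.text is avoided for the final chunk, which may contain no parts when the finish reason is MAX_TOKENS (the prompt here is again illustrative):

```python
response = model.generate_content(
    "Tell me a story about a magic backpack.",
    generation_config=genai.types.GenerationConfig(max_output_tokens=20),
    stream=True,
)

for chunk in response:
    candidate = chunk.candidates[0]
    if candidate.content.parts:
        # Intermediate chunks carry the partial text.
        print(chunk.text, end="")
    else:
        # The final chunk can be empty when the token limit is hit.
        print(f"\n[empty chunk, finish_reason={candidate.finish_reason}]")
```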