
response.text could check the finish_reason and safety filters and give a more directly helpful error message. #282

Open
MarkDaoust opened this issue Apr 11, 2024 · 2 comments
Labels: component:python sdk (Issue/PR related to Python SDK), status:triaged (Issue/PR triaged to the corresponding sub-team), type:feature request (New feature request/enhancement)

Comments

@MarkDaoust (Collaborator) commented Apr 11, 2024

Description of the feature request:

response.text could check the safety filters and give a more directly helpful error message.

If a message is blocked for safety reasons, you currently get an IndexError (or a similarly unhelpful exception). When the finish_reason is anything other than a normal stop, the accessor could instead report some details on why text is not filled in.
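
For illustration, here is a minimal sketch of the kind of defensive check callers currently have to write themselves, and roughly the information a better error message could surface. It assumes the google.generativeai Python SDK; the helper name `safe_text` and the message wording are illustrative, not part of the SDK.

```python
# Sketch only: a wrapper that inspects prompt feedback and finish_reason before
# touching response.text, assuming the google.generativeai Python SDK.
import google.generativeai as genai

def safe_text(response):
    """Return response.text, or raise an error explaining why it is not filled in."""
    # The whole prompt may have been blocked before any candidate was produced.
    feedback = response.prompt_feedback
    if feedback and feedback.block_reason:
        raise ValueError(f"Prompt was blocked: {feedback.block_reason.name}")

    if not response.candidates:
        raise ValueError("The response contains no candidates.")

    candidate = response.candidates[0]
    # Anything other than a normal stop means the text may be missing.
    if candidate.finish_reason.name not in ("STOP", "MAX_TOKENS"):
        raise ValueError(
            f"Generation stopped with finish_reason={candidate.finish_reason.name}; "
            f"safety_ratings={list(candidate.safety_ratings)}"
        )
    return response.text

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Introduce yourself with a friendly greeting")
print(safe_text(response))
```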

@MarkDaoust added the type:feature request and component:python sdk labels on Apr 11, 2024
@MarkDaoust changed the title from "response.text could check the safety filters and give a more directly helpful error message." to "response.text could check the finish_reason and safety filters and give a more directly helpful error message." on Apr 12, 2024
@singhniraj08 added the status:triaged label on Apr 15, 2024
@NickEfthymiou commented:

I would like to point out one use case where this feature request is critical. The first message received from the model is sometimes (frequently enough to be noticeable):

```json
{
  "candidates": [
    {
      "finishReason": "RECITATION",
      "index": 0
    }
  ]
}
```

That’s all the bytes the client has to work with.

Take a chat client as an example. The client issues one message, with a system instruction "You are a helpful assistant" and an initial prompt "Introduce yourself with a friendly greeting". Normally the model responds with "Hello, I am Bard" (usually adding a waving-hand emoji), a few more lines about the training it has received, and generally concludes with "How can I help you today?"

The chat application then takes this response and populates the intro screen, which gets the conversation going. But when the first message back is a RECITATION block, as occasionally happens, there is nothing to populate the intro screen with.

I have a workaround: a default intro message hardcoded in the client. If the first response is valid, the actual response overwrites the hardcoded string and that is what the intro screen shows. If the first response is blocked, nothing overwrites the hardcoded intro message and that is what is shown.

It doesn't seem right that client applications should have to go through such contortions. If a suitable message were provided by whichever layer is responsible for the RECITATION block, which is entirely outside the client's control, it would certainly help.
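
For concreteness, the workaround described above might look roughly like this. This is a sketch only, since the comment does not name a language: it assumes a Python client on the google.generativeai SDK, with an illustrative model name and default string.

```python
# Sketch of the hardcoded-intro fallback described above (assumptions: Python
# client, google.generativeai SDK, illustrative model name and default text).
import google.generativeai as genai

DEFAULT_INTRO = "Hello! I'm your assistant. How can I help you today?"

model = genai.GenerativeModel(
    "gemini-pro",
    # system_instruction requires an SDK/model version that supports it.
    system_instruction="You are a helpful assistant",
)
chat = model.start_chat()

intro = DEFAULT_INTRO
try:
    response = chat.send_message("Introduce yourself with a friendly greeting")
    intro = response.text  # raises if the candidate was blocked or has no parts
except (ValueError, IndexError):
    # RECITATION / SAFETY block: keep the hardcoded intro instead.
    pass

print(intro)  # what the intro screen shows
```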

@zakcali commented May 3, 2024

Additional note: after getting a RECITATION error, the chat session ends when using the Node.js API, because the last element of the history contains empty parts, as follows:

```js
{ parts: [], role: 'model' }
```

and if you try to continue the chat, the API always returns this error:

```
GoogleGenerativeAIError: [400 Bad Request] * GenerateContentRequest.contents[3].parts: contents.parts must not be empty
```
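
The comment above concerns the Node.js SDK, but the same recovery idea, sketched here in Python for consistency with the rest of this issue, would be to drop the empty model turn from the history before continuing (assuming the history has ended up with such a turn):

```python
# Sketch only: rebuild the chat session without any history entries whose
# `parts` list is empty, since the API rejects requests containing empty parts.
import google.generativeai as genai

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat()
# ... after a RECITATION-blocked reply, the history may end with an empty
# model turn such as {parts: [], role: 'model'}.

cleaned_history = [content for content in chat.history if content.parts]
chat = model.start_chat(history=cleaned_history)
response = chat.send_message("Please continue.")
print(response.text)
```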
