
Error Traps #198

Open
ArMaTeC opened this issue Apr 17, 2024 · 1 comment

Comments

ArMaTeC commented Apr 17, 2024

Hi all,

Is there any way to add an error trap for the LLM response?

Here are two examples.

Sometimes when I talk to the assistant I get this spam:
[D][voice_assistant:619]: Response: "Sorry, I had a problem talking to OpenAI: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_0Ju96bO4XKU8w1ylwRykk5nj", 'type': 'invalid_request_error', 'param': 'messages.[9].role', 'code': None}}"

But the most important and most wanted one would be a way to retrigger the assistant when it asks things like:
[D][voice_assistant:619]: Response: "Is there anything else I can help you with"
[D][voice_assistant:619]: Response: "Are you certain you wish to turn off all lights"
[D][voice_assistant:619]: Response: "Is this correct"
and so on
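A minimal sketch of what such a trap could look like. Nothing below is part of the integration; the function name, the patterns, and the three categories are all illustrative assumptions based on the log lines above. The idea is to classify each assistant reply so the caller can decide whether to surface it, retry, or keep the microphone open for a follow-up answer:

```python
import re

# Hypothetical response classifier (illustrative, not an existing API).
# Returns "error" if an API failure leaked into the reply, "follow_up" if
# the assistant asked a clarifying question and should be retriggered,
# and "ok" for a normal response.

ERROR_PATTERN = re.compile(
    r"problem talking to OpenAI|Error code: \d{3}", re.IGNORECASE
)

FOLLOW_UP_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"anything else I can help you with",
        r"are you (certain|sure) you wish",
        r"is (this|that) correct",
    )
]

def classify_response(text: str) -> str:
    if ERROR_PATTERN.search(text):
        return "error"
    if any(p.search(text) for p in FOLLOW_UP_PATTERNS):
        return "follow_up"
    return "ok"
```

On "follow_up", the pipeline could re-open the voice input instead of ending the conversation; on "error", it could retry or reset the chat history rather than reading the raw traceback aloud.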

@jleinenbach
This is indeed quite annoying when ChatGPT guesses entity_ids etc.: the function call fails, you get this error, and you have to close the chat and start over.
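The 400 error in the first example happens when the stored chat history contains an assistant message with tool_calls that never received matching tool responses (e.g. because a guessed entity_id made the call fail). One conceivable workaround, sketched below with an illustrative helper name, is to sanitize the history before the next request so orphaned tool_calls are dropped instead of forcing the user to start over:

```python
# Hypothetical history sanitizer (illustrative, not part of the integration).
# Drops assistant messages whose tool_calls have no matching "tool" response,
# which is exactly what triggers the 400 "must be followed by tool messages
# responding to each tool_call_id" error.

def sanitize_history(messages: list[dict]) -> list[dict]:
    # Collect the tool_call_ids that actually got a response.
    answered = {
        m["tool_call_id"] for m in messages if m.get("role") == "tool"
    }
    cleaned = []
    for m in messages:
        if m.get("role") == "assistant" and m.get("tool_calls"):
            ids = {call["id"] for call in m["tool_calls"]}
            if not ids <= answered:
                continue  # orphaned tool_calls: skip to avoid the 400 error
        cleaned.append(m)
    return cleaned
```

Run on the history before each new completion request, this keeps the conversation usable after a failed function call instead of requiring a fresh chat.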
