[Bug]: Intermittent "ValueError: Failed to convert output to JSON:" with AzureOpenAI and RouterQueryEngine #13562
The intermittent failure can have several causes. To address it, review your data handling and serialization logic, ensure that all objects are JSON-serializable, and handle concurrency carefully to avoid race conditions. Additionally, validate the format of the output before attempting to serialize it to JSON.
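The last suggestion, validating output before serializing, can be sketched as a small helper. This is a minimal example, not code from the report; the function name `safe_to_json` is hypothetical.

```python
import json


def safe_to_json(obj):
    """Serialize obj to JSON, falling back to its string form
    when the object is not directly JSON-serializable."""
    try:
        return json.dumps(obj)
    except (TypeError, ValueError):
        # Fallback: serialize the string representation instead of crashing.
        return json.dumps(str(obj))
```

Wrapping the dump this way turns a hard crash into a degraded-but-usable result.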
As you can see, the LLM did not output a choice for the router query engine (it seems to think there is no valid choice). The options here would be either wrapping your query in a try/except, or letting that response return to the user (although that might be confusing for users too?).
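The try/except option can be sketched as a thin wrapper around the query call. This is a hypothetical helper, not part of llama-index; it works with any object exposing a `query()` method, so the mock engine below stands in for a real `RouterQueryEngine`.

```python
def query_with_fallback(query_engine, question,
                        fallback="No valid choice was selected."):
    """Run query_engine.query(question); if the router raises the
    intermittent ValueError, return a fallback answer instead of crashing."""
    try:
        return query_engine.query(question)
    except ValueError:
        # The router failed to convert its output to JSON / pick a choice.
        return fallback
```

This keeps a long eval run alive at the cost of recording a placeholder answer for the failed query.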
Yeah, the problem is that if you want to run an evaluator, for instance, it will just error out. I guess there's probably a workaround?
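One possible workaround for eval runs is retrying the query a few times before giving up, since the failure is intermittent. Again a hypothetical sketch, not llama-index API; any engine with a `query()` method works.

```python
def query_with_retries(query_engine, question, max_attempts=3):
    """Retry query_engine.query(question) up to max_attempts times,
    re-raising the last ValueError if every attempt fails."""
    last_exc = None
    for _ in range(max_attempts):
        try:
            return query_engine.query(question)
        except ValueError as exc:
            last_exc = exc  # intermittent router failure; try again
    raise last_exc
```

Because the error is non-deterministic, a couple of retries usually lets an eval set finish instead of aborting on the first bad sample.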
Bug Description
It intermittently fails to produce an output and errors out. In one case, the result is the output printed to the terminal:
But the previous run I got:
Prior to that:
Prior to that:
So it fails intermittently, which makes running eval sets impossible.
Version
0.10.34
Steps to Reproduce
llama-index==0.10.34, llama-index-llms-azure-openai==0.1.8, Python 3.12.3 on Arch Linux, virtualenv in ~/.virtualenvs/math/
I used the exact code from this page. The only modifications were as follows:
The code block with the definition of the llm was edited to read
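The edited llm definition is not included in the scraped report; a minimal sketch of an AzureOpenAI setup in llama-index looks like the following. The deployment name, endpoint, key, and API version are placeholders, not values from the original report.

```python
from llama_index.llms.azure_openai import AzureOpenAI

# All values below are placeholders for your own Azure OpenAI resource.
llm = AzureOpenAI(
    engine="my-gpt4-deployment",  # Azure deployment name (placeholder)
    model="gpt-4",
    api_key="<your-api-key>",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_version="2024-02-01",
)
```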
and the final code block was edited to change the await to an asyncio.run():
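Top-level `await` only works in notebooks and async REPLs; in a plain script the coroutine has to be driven explicitly. A minimal sketch of that change, with a stand-in coroutine `aquery` in place of the tutorial's `query_engine.aquery()`:

```python
import asyncio


async def aquery(question: str) -> str:
    """Stand-in for query_engine.aquery(); hypothetical, for illustration."""
    await asyncio.sleep(0)  # simulate async I/O
    return f"answer to {question!r}"


async def main() -> None:
    response = await aquery("What is 2 + 2?")
    print(response)


if __name__ == "__main__":
    # In a script, replace the notebook's `await ...` with asyncio.run(...).
    asyncio.run(main())
```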
Relevant Logs/Tracebacks