
🐛 fix: Prevent Node Server Crash Due to Unhandled ChatCompletionMessage Error #1278

Merged
merged 7 commits into main from ollama-fix on Dec 5, 2023

Conversation

@danny-avila (Owner) commented Dec 5, 2023

Summary

I have implemented a temporary workaround to address a bug in the openai Node library where the server crashes due to an unhandled ChatCompletionMessage error when working with reverse proxies or APIs mimicking the OpenAI spec. This bug fix ensures that the server remains stable until the openai library resolves the underlying issue.
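This is not the exact patch, just a minimal sketch of the guard's shape, assuming an openai v4 client and its `beta.chat.completions.stream` helper; the `streamCompletion` wrapper and the fall-back-to-partial-text behavior are illustrative:

```js
const OpenAI = require('openai');

// Illustrative guard: collect streamed deltas, and if the stream never
// yields a final role=assistant message (as some reverse proxies and
// OpenAI-compatible APIs fail to do), fall back to the partial text
// instead of crashing on an uncaught OpenAIError.
async function streamCompletion(client, params) {
  const stream = client.beta.chat.completions.stream(params);

  let intermediateReply = '';
  stream.on('content', (delta) => {
    intermediateReply += delta;
  });
  // An unhandled 'error' event is what was bringing the server down.
  stream.on('error', (err) => {
    console.error('[streamCompletion] stream error:', err.message);
  });

  try {
    const final = await stream.finalChatCompletion();
    return final.choices[0]?.message?.content ?? intermediateReply;
  } catch (err) {
    if (err instanceof OpenAI.OpenAIError) {
      // Last resort: return whatever content was streamed before the failure.
      return intermediateReply;
    }
    throw err;
  }
}
```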

Closes #1270

FYI, I seem to have issues with Ollama independent of LibreChat when I don't include --drop_params; related issue: BerriAI/litellm#992 (comment)

This change also prevents title generation if a request was cancelled.
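A sketch of that guard as well; `client.abortController` and `client.addTitle` are stand-ins for the actual OpenAIClient fields, not confirmed names:

```js
// Hypothetical guard: skip the extra title-generation completion entirely
// when the originating request was cancelled by the user.
async function maybeAddTitle(client, { text, responseText }) {
  if (client.abortController?.signal?.aborted) {
    return; // the request was aborted; generating a title would be wasted work
  }
  await client.addTitle({ text, responseText });
}
```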

Relevant openai bug: openai/openai-node#553

Change Type

  • Bug fix (non-breaking change which fixes an issue)

Testing

To test this workaround:

  1. Set up a reverse proxy service or an alternate baseURL mimicking the OpenAI spec.
  2. Integrate the modified stream-handling code from this PR into the application.
  3. Run the application and observe that the server no longer crashes when a ChatCompletionMessage with role=assistant is not received.

Further testing should be done using different reverse proxy setups to ensure the workaround is robust across various scenarios.
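For step 1, a hypothetical reproduction against a local OpenAI-compatible proxy; the URL, key, and model name are placeholders:

```js
const OpenAI = require('openai');

(async () => {
  // Point the client at an OpenAI-compatible proxy instead of api.openai.com.
  const client = new OpenAI({
    apiKey: 'sk-anything', // many proxies ignore the key
    baseURL: 'http://localhost:8000/v1', // e.g., a LiteLLM proxy in front of Ollama
  });

  const stream = client.beta.chat.completions.stream({
    model: 'ollama/mistral', // placeholder model name
    messages: [{ role: 'user', content: 'Say hello.' }],
  });

  stream.on('content', (delta) => process.stdout.write(delta));

  // Before the workaround, this await could fail with an unhandled
  // OpenAIError when the proxy never sent a final role=assistant message.
  const completion = await stream.finalChatCompletion();
  console.log('\nfinish_reason:', completion.choices[0].finish_reason);
})().catch(console.error);
```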

Test Configuration:

  • OS: Linux 5.10.16.3-microsoft-standard-WSL2 x86_64
  • Node version: v18.13.0
  • Library version: openai v4.20.1

Checklist

  • My code adheres to this project's style guidelines
  • I have performed a self-review of my own code
  • I have commented on any complex areas of my code
  • My changes do not introduce new warnings
  • I have written tests demonstrating that my changes are effective or that my feature works
  • Local unit tests pass with my changes

@danny-avila danny-avila merged commit f1bc711 into main Dec 5, 2023
2 checks passed
@danny-avila danny-avila deleted the ollama-fix branch December 5, 2023 03:58
shortpoet pushed a commit to shortpoet/LibreChat that referenced this pull request Dec 30, 2023
🐛 fix: Prevent Node Server Crash Due to Unhandled ChatCompletionMessage Error (danny-avila#1278)

* refactor(addTitle): avoid generating title when a request was aborted

* chore: bump openai to latest

* fix: catch OpenAIError Uncaught error as last resort

* fix: handle final messages excludes role=assistant

* Update OpenAIClient.js

* chore: fix linting errors
cnkang pushed a commit to cnkang/LibreChat that referenced this pull request Feb 6, 2024
jinzishuai pushed a commit to aitok-ai/LibreChat that referenced this pull request May 20, 2024
Successfully merging this pull request may close these issues.

[Bug]: Running LibreChat against LiteLLM backed by Ollama