on('error', () => ...) event does not fire on "OpenAIError: stream ended without producing a ChatCompletionMessage with role=assistant" #553

Open

danny-avila opened this issue Dec 5, 2023 · 0 comments
Labels
bug Something isn't working


Confirm this is a Node library issue and not an underlying OpenAI API issue

  • This is an issue with the Node library

Describe the bug

Similar to #526, except there's no real way to handle this other than within process.on('uncaughtException', (err) => {}), a hacky workaround I finagled, or using the old API instead of openai.beta.

I understand this may be exclusive to reverse proxies or other APIs mimicking the OpenAI spec, perhaps omitting a critical part of it, but the error should still land where it is expected to be caught.

OpenAIError: stream ended without producing a ChatCompletionMessage with role=assistant
    at ChatCompletionStream._AbstractChatCompletionRunner_getFinalMessage (/app/node_modules/openai/lib/AbstractChatCompletionRunner.js:464:11)
    at ChatCompletionStream._AbstractChatCompletionRunner_getFinalContent (/app/node_modules/openai/lib/AbstractChatCompletionRunner.js:455:134)
    at ChatCompletionStream._emitFinal (/app/node_modules/openai/lib/AbstractChatCompletionRunner.js:282:152)
    at /app/node_modules/openai/lib/AbstractChatCompletionRunner.js:77:22
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
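
For completeness, here's a minimal sketch of the process-level catch mentioned above; matching on the error message text is my own illustrative filter, not anything the library provides:

// Minimal sketch: swallow only this known finalization error, crash otherwise
process.on('uncaughtException', (err) => {
  if (err?.message?.includes('stream ended without producing a ChatCompletionMessage')) {
    console.error('Ignoring stream finalization error:', err);
    return; // keep the server alive for this known case
  }
  console.error(err);
  process.exit(1); // preserve crash behavior for anything else
});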

To Reproduce

  • Use a reverse proxy service via an alternate baseURL
  • Streaming works; I can add breakpoints on the chunks and confirm they are indeed generation partials
  • The error fires after the finalMessage listener, is uncaught, and will crash the Node server unless prevented as shown in the snippets below
    • I noticed the end emit expects the last message in stream.messages to be an assistant message (sketched after this list), so my hack prevents the issue by pushing an artificial assistant message containing the real tokens generated
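
For context, the finalization logic appears to scan stream.messages from the end for an assistant message and throws when none is found. Roughly paraphrased from the stack trace's getFinalMessage step, not the exact library source:

// Rough paraphrase of the getFinalMessage check in AbstractChatCompletionRunner
// (the library throws OpenAIError; a plain Error is used here to stay self-contained)
function getFinalMessage(messages) {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'assistant') return messages[i];
  }
  throw new Error(
    'stream ended without producing a ChatCompletionMessage with role=assistant',
  );
}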

Code snippets

// Here's how I'm handling streams
import OpenAI from 'openai';

// Client pointed at the reverse proxy; the env var name is a placeholder
const openai = new OpenAI({ baseURL: process.env.REVERSE_PROXY_URL });

try {
  const stream = await openai.beta.chat.completions
    .stream({
      ...modelOptions, // model, messages, etc., defined elsewhere
      stream: true,
    })
    .on('error', (err) => {
      /* Expect the error here, but it never fires */
    })
    .on('finalMessage', (message) => {
      /* message.role === 'user' here, causing the uncaught error */
    });

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content || '';
  }
} catch (err) {
  /* If not caught above, expect the error here; it never is */
}

// My hacky workaround
try {
  let intermediateReply = '';
  const stream = await openai.beta.chat.completions
    .stream({
      ...modelOptions,
      stream: true,
    })
    .on('finalMessage', (message) => {
      // The final emit expects the last message in stream.messages to have
      // role 'assistant'; push an artificial one built from the real tokens
      // so the check passes.
      if (message?.role !== 'assistant') {
        stream.messages.push({ role: 'assistant', content: intermediateReply });
      }
    });

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content || '';
    intermediateReply += token;
  }
} catch (err) {
  /* With the workaround in place, the error is prevented entirely */
}

OS

Linux 5.10.16.3-microsoft-standard-WSL2 x86_64 x86_64

Node version

v18.13.0

Library version

openai v4.20.1
