Confirm this is a Node library issue and not an underlying OpenAI API issue
This is an issue with the Node library
Describe the bug
Similar to #526, except there's no real way to handle this other than inside `process.on('uncaughtException', (err) => {})` (sketched below, after the stack trace), a hacky workaround I finagled, or falling back to the old API instead of `openai.beta`.
I understand this may be exclusive to reverse proxies or other services mimicking the OpenAI spec, perhaps missing a critical part of it, but the error should still land where it can be caught as expected.
```
OpenAIError: stream ended without producing a ChatCompletionMessage with role=assistant
    at ChatCompletionStream._AbstractChatCompletionRunner_getFinalMessage (/app/node_modules/openai/lib/AbstractChatCompletionRunner.js:464:11)
    at ChatCompletionStream._AbstractChatCompletionRunner_getFinalContent (/app/node_modules/openai/lib/AbstractChatCompletionRunner.js:455:134)
    at ChatCompletionStream._emitFinal (/app/node_modules/openai/lib/AbstractChatCompletionRunner.js:282:152)
    at /app/node_modules/openai/lib/AbstractChatCompletionRunner.js:77:22
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
```
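For reference, the process-level escape hatch mentioned above looks like this. It's a last resort rather than a fix, since it intercepts every uncaught exception in the process, not just this one:

```js
// Last resort: keep the otherwise-uncaught stream error from crashing the server.
// This catches ALL uncaught exceptions, so inspect err before deciding to carry on.
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception (possibly the stream finalization error):', err);
});
```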
To Reproduce
1. Use a reverse proxy service via an alternate `baseURL`. I was able to reproduce by running ollama in conjunction with litellm, passing the proxy's server URL as `baseURL` (client setup sketched below this list).
2. Streaming works; I can set breakpoints on the chunks and they are indeed generation partials.
3. The error fires after the `finalMessage` listener, is uncaught, and will crash the Node server unless prevented as shown in the snippets below.
4. I noticed the `end` emit expects the last message in `stream.messages` to be an assistant message, so my hack prevents the issue by pushing an artificial assistant message containing the real generated tokens.
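A minimal sketch of the client setup being described; the host, port, and key here are assumptions, so substitute whatever your proxy actually uses:

```js
import OpenAI from 'openai';

// Assumption: a litellm proxy (fronting ollama) listening locally on port 4000;
// any OpenAI-compatible reverse proxy URL triggers the same code path.
const openai = new OpenAI({
  baseURL: 'http://localhost:4000/v1',
  apiKey: 'sk-placeholder', // many OpenAI-compatible proxies accept any non-empty key
});
```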
Code snippets
```js
// Here's how I'm handling streams
try {
  const stream = await openai.beta.chat.completions.stream({
    ...modelOptions,
    stream: true,
  })
    .on('error', (err) => {
      /* Expect error here */
    })
    .on('finalMessage', (message) => {
      /* role === 'user' here, causing the uncaught error */
    });

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content || '';
  }
} catch (err) {
  /* If not above, expect error here */
}

// My hacky workaround
try {
  let intermediateReply = '';
  const stream = await openai.beta.chat.completions.stream({
    ...modelOptions,
    stream: true,
  })
    .on('finalMessage', (message) => {
      if (message?.role !== 'assistant') {
        stream.messages.push({ role: 'assistant', content: intermediateReply });
      }
    });

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content || '';
    intermediateReply += token;
  }
} catch (err) {
  //
}
```
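With the artificial assistant message pushed in the `finalMessage` handler, the end-of-stream lookup shown in the stack trace above finds a `role: 'assistant'` entry in `stream.messages`, so the final content resolves and the uncaught `OpenAIError` never fires.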
OS
Linux 5.10.16.3-microsoft-standard-WSL2 x86_64 x86_64
Node version
v18.13.0
Library version
openai v4.20.1