
[BUG] UnicodeEncodeError during evaluation #3251

Closed
tkonao opened this issue May 14, 2024 · 5 comments

tkonao commented May 14, 2024

Describe the bug
I created a flow that does RAG using Azure AI Search, but I get the following error during an evaluation:

SystemError: Unexpected error occurred while executing the batch run. Error: (UnicodeEncodeError) 'utf-8' codec can't encode characters in position 171-172: surrogates not allowed.

This error first appeared two weeks ago; before that, evaluations of the exact same flow (with only some prompt changes) were working. Now the evaluation works again without any change on my side, but the same issue appears with a flow that does RAG using the LangChain connector to Azure AI Search.

SystemError: Unexpected error occurred while executing the batch run. Error: (UnicodeEncodeError) 'utf-8' codec can't encode characters in position 37-38: surrogates not allowed.

Both flows were created using the RAG template of Azure AI Studio.
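
For reference, the underlying Python error means the text being serialized contains lone UTF-16 surrogate code points (often the remnants of a split or truncated emoji). A minimal sketch that reproduces the same message; the string content is purely illustrative, not the actual flow output:

```python
# Lone UTF-16 surrogates (e.g. an emoji broken into "\ud83d" and "\ude00"
# as two separate code points) cannot be encoded to UTF-8.
text = "retrieved chunk \ud83d\ude00 truncated"

try:
    text.encode("utf-8")
except UnicodeEncodeError as exc:
    # 'utf-8' codec can't encode characters in position 16-17: surrogates not allowed
    print(exc)
```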

Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 1.10.1
  • Operating System: Windows 11
  • Python Version using python --version: 3.11.7
tkonao added the bug Something isn't working label May 14, 2024
@brynn-code
Contributor

Hi, which tool is raising this Unicode error? Are there any details that would let us reproduce the bug?


tkonao commented May 15, 2024

Hi, I don't know if this is a tool issue, because it happens when I run an evaluation in Azure AI Studio with these parameters:
evaluation with context -> my RAG flow and a dataset -> metrics: coherence, relevance, groundedness.

The evaluation run gives me the log:

BulkTest run ... has a dependent run .. terminated unexpectedly. Mark it as canceled.

When I check the batch run of the flow I am trying to evaluate, all the lines are marked as completed and all the output fields are correctly filled, but the status of the run is failed with the error I mentioned previously.

To reproduce the bug, you can try to create a RAG flow in the playground with a connection to an Azure AI Search index, then run an evaluation on it. Note that this flow template has changed since the one I used.

As I said in my first message, the bug seems to appear and disappear without any explanation, so you may not be able to reproduce it, but perhaps you can give me some ideas of where to look.

Please note that I described the bug as it occurs in Azure AI Studio, but it also happens when I clone the two flows (RAG and evaluation) and try to run the evaluation using VS Code.

@brynn-code
Contributor

@tkonao I'm not sure what happened, as there isn't enough detail. Given that the bug seems to appear and then disappear over a period of time, it might be a bug in an old runtime version that was fixed in a newer one. I'm sorry that I can't give more clues based on the current information.
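
In the meantime, one possible stopgap is to strip lone surrogates from the flow output before it gets serialized. This is only a sketch, assuming you can add a small Python node or post-processing step to the flow; strip_surrogates is a hypothetical helper here, not a promptflow API:

```python
def strip_surrogates(text: str) -> str:
    """Drop lone UTF-16 surrogates that cannot be represented in UTF-8."""
    # errors="ignore" silently removes the offending code points;
    # use errors="replace" instead to keep a visible U+FFFD marker.
    return text.encode("utf-8", errors="ignore").decode("utf-8")

print(strip_surrogates("answer \ud83d\ude00 from the index"))  # -> "answer  from the index"
```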

@brynn-code
Contributor

If you are using the automatic runtime, it will pick up the newest runtime image each time you start it.

brynn-code self-assigned this May 16, 2024

tkonao commented May 16, 2024

Hi, the problem seems to be solved.
I restarted with an automatic runtime, so maybe that was the cause.

Thanks for your help.

tkonao closed this as completed May 16, 2024