[GSK-3513] Fix RAGAS metric computation #1925
base: main
Conversation
🔍 Existing Issues For Review — Your pull request is modifying functions with the following pre-existing issues: 📄 File: giskard/rag/evaluate.py
@pierlj looks good, can you add a test on the RAGAS metrics to make sure they are calculated correctly?
Good for me, @pierlj do you want to make a last check?
Yep, I will have a look
Quality Gate passed
According to this issue, some RAGAS metrics are not properly computed (this includes context recall, context precision, and faithfulness).
To fix it:
- add a `retrieved_documents` argument to the `evaluate` method
- allow `answer_fn` to return retrieved documents alongside the answer to a question
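The change described above can be sketched as follows. This is an illustrative example, not Giskard's actual API: the function names, signatures, and result shape are assumptions. The idea is that `answer_fn` returns the retrieved documents together with the answer, so `evaluate` can forward them to context-based metrics (context recall, context precision, faithfulness), which cannot be computed from the answer alone.

```python
# Hypothetical sketch of the described fix; names are illustrative only.

def answer_fn(question: str) -> tuple[str, list[str]]:
    # A real implementation would query a RAG pipeline; here we stub it out.
    retrieved_documents = [f"doc about {question}"]
    answer = f"Answer to: {question}"
    return answer, retrieved_documents


def evaluate(questions: list[str], answer_fn) -> list[dict]:
    results = []
    for q in questions:
        # answer_fn now returns the retrieved documents alongside the answer.
        answer, retrieved_documents = answer_fn(q)
        results.append({
            "question": q,
            "answer": answer,
            # Context-based RAGAS metrics need the retrieved documents,
            # not just the generated answer.
            "retrieved_documents": retrieved_documents,
        })
    return results
```

With this shape, each evaluation record carries the contexts the metrics need, instead of the metrics being computed against an empty or missing context.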