Automate the 'precheck' validation step using semantic similarity scores #354

Open · harshit-sh opened this issue May 16, 2024 · 1 comment

@harshit-sh

This task involves automating the 'precheck' stage, which currently requires a human triager to validate whether the student model already knows the information a user is trying to teach it.

Similar to the steps in a standard RAG workflow, the sentences could be converted to vectors using embeddings and then compared with metrics like cosine similarity. Based on the resulting scores, the 'precheck' stage can be marked as either a 'success' (✅) or a 'failure' (❎).

This can be included with the precheck call to the @instructlab-bot GH bot.
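A minimal sketch of what this check could look like, using the sentence-transformers library from the SBERT docs linked below. The model name (`all-MiniLM-L6-v2`), the function name, and the 0.8 threshold are illustrative assumptions, not settled choices:

```python
# Sketch only: the embedding model and threshold are assumptions to be tuned.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works here

def model_already_knows(taught_answer: str, student_answer: str,
                        threshold: float = 0.8) -> bool:
    """Embed both answers and compare them with cosine similarity.

    Returns True when the student model's answer is already semantically
    close to what the user is trying to teach, i.e. the contribution is
    likely redundant and precheck should flag it.
    """
    emb = embedder.encode([taught_answer, student_answer], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    return score >= threshold
```

The threshold would need tuning against examples a human triager has already labeled, so the automated check can be validated before it replaces the manual step.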

References:

  1. https://huggingface.co/tasks/sentence-similarity
  2. https://www.sbert.net/docs/usage/semantic_textual_similarity.html
@Gregory-Pereira (Collaborator)

Let me know if I am off the mark here, but this seems similar to the work I'm doing in #356; it's just that the suggested implementation differs. This issue suggests using a RAG-style vector + embedding similarity score, whereas my implementation is a one-shot model evaluation via a natural-language prompt (with the teacher model used for precheck and the trained model used for evaluation). cc @mingxzhao, thoughts on this evaluation method?
