Configurable feedback level #1105
base: main
Conversation
The prompting still needs to be changed in concert with the normalization. Currently (for LLM-based feedback) we prompt the LLM to score 0-10. If we go with fewer levels, that prompting should also change to 0-4, 0-2, etc.
Perhaps the current PR could instead refer to a score range, then? Normalizing by something less than 10 with the current prompts will produce scores outside the [0, 1] range, but I'm unsure if that was the intention.
Figure out the phrasing around this feature: is it about the normalized score range being different from [0, 1], or about the resolution of the prompted score (still normalized to [0, 1])? The latter requires the changes Josh mentioned.
It's the latter - offering variable resolutions is the goal here. For many use cases, clients have observed that LLMs are unable to properly distinguish 11 levels (0-10) of feedback, and find that 3-5 levels work better. The goal of this PR is to support that workflow.
Instead of providing ratings for LLM-generated feedback between 0.1 and 1 in increments of 0.1, offer options to do it in increments of 0.2 or 0.33.
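The normalization described above could be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation: the function name `normalize_score` and the `levels` parameter are hypothetical, and it assumes the LLM is prompted to return an integer from 1 to `levels`, which is then divided by `levels` to land in (0, 1].

```python
def normalize_score(raw: int, levels: int) -> float:
    """Map a prompted integer score in [1, levels] to a value in (0, 1].

    levels is the feedback resolution: 10 yields increments of 0.1,
    5 yields increments of 0.2, 3 yields increments of ~0.33.
    (Hypothetical helper illustrating the discussion, not the PR's API.)
    """
    if not 1 <= raw <= levels:
        raise ValueError(f"score {raw} outside expected range 1..{levels}")
    return raw / levels
```

Under this scheme the prompt must ask for a score in the matching range; as noted in the conversation, normalizing a 0-10 prompted score by anything less than 10 would push results outside [0, 1].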