feat: system prompt for playground page #591
Conversation
Force-pushed from 79e3803 to 686756c
Signed-off-by: Philippe Martin <phmartin@redhat.com>
Force-pushed from 686756c to ecd1e9d
Signed-off-by: Philippe Martin <phmartin@redhat.com>
@@ -87,7 +87,7 @@ export class PlaygroundV2Manager extends Publisher<PlaygroundV2[]> implements Di
   * @param userInput the user input
   * @param options the model configuration
   */
-  async submit(playgroundId: string, userInput: string, options?: ModelOptions): Promise<void> {
+  async submit(playgroundId: string, userInput: string, systemPrompt: string, options?: ModelOptions): Promise<void> {
Isn't the systemPrompt something we can't/shouldn't change once the conversation has started?
The conversation history should not be changed: being able to edit it while there are already existing messages would lead to incoherence.
Looking at how ChatGPT and similar tools work, the system prompt is always defined at the beginning of the conversation, in the same way that the model id is defined at conversation creation.
The system prompt should either be provided in the creation request and added to the conversation, or have its own endpoint to change it; once the conversation is started, you cannot edit it anymore (disabling the text area).
We could imagine:

```typescript
setConversationSystemPrompt(conversationId: string, content: string): void {
  const conversation = this.conversations.get(conversationId);
  // ensure the conversation exists
  if (!conversation) throw new Error('conversation does not exist');
  if (conversation.length > 1) throw new Error('cannot change the system prompt of a started conversation');
  // set the system prompt value
}
```
It seems to me that this is not what @slemeur defined in the spec (where the system prompt is one of the settings):
How does it work?
- The settings are applied to the NEXT call to the model and are sent in the payload of that call.
- That means that, each time we make a call to the model, we read the state of the settings, get the values, and append them to the API call.
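The flow described in the spec could be sketched as follows. This is only an illustration of the "read settings at call time" idea, not the actual extension code; the `PlaygroundSettings` shape, `buildPayload` helper, and the assumption that the system prompt is one of the settings are all hypothetical.

```typescript
// Hypothetical sketch: the settings are re-read on every call, so the NEXT
// request always reflects the latest values the user set in the UI.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

interface PlaygroundSettings {
  systemPrompt?: string; // assumed to be one of the settings, per the spec
  temperature?: number;
}

function buildPayload(history: Message[], userInput: string, settings: PlaygroundSettings) {
  const messages: Message[] = [];
  // Read the current state of the settings and prepend the system prompt, if any.
  if (settings.systemPrompt) {
    messages.push({ role: 'system', content: settings.systemPrompt });
  }
  messages.push(...history, { role: 'user', content: userInput });
  return { messages, temperature: settings.temperature };
}
```

Under this reading, changing the system prompt between two submissions silently changes the first message of the next payload, which is exactly the coherence concern raised above.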
IMO the quoted statement does not reflect how a system prompt works in an LLM conversation @slemeur
The goal is to let the user iterate with the different "settings" that configure how the prompt is sent.
- Say the user tries with one set of settings.
- They get the answer and want to see whether adjusting the settings makes the answer any better.
Hence the need to apply the settings only to the next prompt sent to the model, so the user can keep tuning.
I think there is a misunderstanding: changing a setting should only apply to the next messages, but the system prompt is something the model receives at the start of the conversation. It is not a setting.
If you change the system prompt, you are changing the first message (the system prompt is simply a message with the role `system`).
In ChatGPT, when you change a message in a conversation, all the messages below it are deleted.
chagtp.mp4
You cannot expect to be able to change the system prompt in an existing conversation, as the system prompt is the first message of the conversation. Otherwise you would have to delete all the messages already sent.
I am not saying that a chat bot is deterministic, but it is at least coherent: if you have an existing conversation, change the system prompt, and then re-send a message, the existing conversation is based on a message that is not there anymore. It does not make sense.
What I am explaining is how all ChatGPT-like applications work; here is the explanation from the ChatGPT website.
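The invariant being argued for here can be sketched in a few lines: the system prompt is just the first `system`-role message, so editing it on a conversation that already has exchanges must either be rejected (as in the earlier `setConversationSystemPrompt` suggestion) or wipe the history, ChatGPT-style. The `Conversation` shape below is a hypothetical illustration, not the extension's actual data model.

```typescript
type Role = 'system' | 'user' | 'assistant';
interface ChatMessage { role: Role; content: string }
interface Conversation { messages: ChatMessage[] }

// ChatGPT-style behaviour: editing an earlier message (here, the system
// prompt) drops everything after it, so the transcript stays coherent.
function setSystemPrompt(conversation: Conversation, content: string): void {
  const hasExchanges = conversation.messages.some(m => m.role !== 'system');
  if (hasExchanges) {
    // Alternative design: throw here instead of truncating, as proposed above.
    conversation.messages = [];
  } else {
    // No exchanges yet: just replace any previous system prompt.
    conversation.messages = conversation.messages.filter(m => m.role !== 'system');
  }
  conversation.messages.unshift({ role: 'system', content });
}
```

Either way, the history is never left sitting on top of a system prompt it was not generated against.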
I can't make it work; I have timeout errors even without the system prompt.
OK, I'll look at this timeout problem.
LGTM
Force-pushed from 81e08f2 to 2d6e1a5
I think you can forget about it, as it seems it was related to the health of my Podman machine. Restarting it made it work again.
Signed-off-by: Philippe Martin <phmartin@redhat.com>
Force-pushed from 3098934 to 1c41030
I also get a timeout sometimes (and @axel7083 does too). I looked at the durations of the different calls: the completions.create call takes some time (generally 2s on my system, but up to 45s during my tests). The latest commit on this PR should fix the problem.
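The actual fix is in the PR's latest commit; purely as an illustration of the failure mode, a client-side deadline shorter than the worst observed generation time (~45s here) will abort a call the server is still happily serving. A minimal, hypothetical sketch of running such a call under a generous deadline, assuming the underlying client accepts an `AbortSignal`:

```typescript
// Hypothetical sketch: run an abortable operation under a deadline longer
// than the slowest observed completion, instead of a short default timeout.
async function withDeadline<T>(
  op: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number,
): Promise<T> {
  const controller = new AbortController();
  // Abort the in-flight operation if the deadline passes.
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await op(controller.signal);
  } finally {
    clearTimeout(timer); // don't leave the timer pending on success
  }
}
```

Usage would look like `await withDeadline(signal => client.chat.completions.create(payload, { signal }), 60_000)`; whether the client in use accepts a signal in its request options is an assumption here, and the 60s value is only chosen to exceed the 45s worst case mentioned above.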
What does this PR do?
Screenshot / video of UI
playground-system-prompt.mp4
What issues does this PR fix or reference?
Part of #525
Fixes #543
How to test this PR?
Set instructions in the system prompt textarea and check that they are used for the following prompts. (NOTE: for the video above, the model frequently did not take the system prompt into consideration for the second and following prompts.)