No context history #211

Open
blejdacik opened this issue May 6, 2024 · 5 comments

Comments

@blejdacik

Hi, I use HA as a voice assistant. I noticed that it doesn't remember my previous messages. I checked the settings but couldn't find an option for it.
Is there a way to set it so that the conversation history is attached to the prompt?
I use openWakeWord. Every time I say the wake word, it starts a new chat, as if none of the previous information existed. This makes it hard to hold a full conversation with it.
Thank you.

@jleinenbach

I also employ this setup and cannot replicate the issue with the details you've provided. When I ask ChatGPT a question and then refer back to it, ChatGPT can recall the information, so it seems to work correctly in this instance.

@lyrics1988123

I have the same issue when using my Raspberry Pi with local wake word detection and the Wyoming protocol.
If I use the chat, I can stay in the chat room. But even there, if you close the chat window and open it again, it creates a new chat.

How can we achieve a history?

@jleinenbach

... But even there, if you close the chat window and open it again, it creates a new chat.

IMHO this is the expected behavior.

@Rafaille

It is the expected behaviour. The developer behind Wyoming Satellite has promised he would eventually implement continuous conversation through Voice Assist, but nothing so far.

@mkammes

mkammes commented May 16, 2024

@Rafaille, so I am completely clear:
As of today, every single Voice Assist activation using Extended OpenAI is a new chat with no history?

I'm not finding that to be the case. If I hide an odd fact in my prompt template and rotate its values out, ChatGPT via Extended OpenAI will often repeat erroneous information that was contained in a previous Voice Assist activation.

I've found the only way to get around this is to activate the Voice Assistant several times and ask it questions to "cycle" the old prompt template out of ChatGPT's history, so the new prompt is the only one used (with the "Clear All Messages" option selected under "Context Truncation Strategy" in the config). I believe that's what the "Context Threshold" config option in Extended OpenAI Conversation controls, i.e. how many characters (tokens?) the history can reach before the old prompt template is cleared out.

I'm currently using gpt-3.5-turbo-1106.
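To make the mechanism described above concrete, here is a minimal sketch of how a "clear all messages when the context threshold is exceeded" strategy could work. The message structure, the function names, and the 4-characters-per-token estimate are illustrative assumptions only, not the Extended OpenAI Conversation integration's actual code.

```python
# Minimal sketch of a "clear all messages" truncation strategy (illustrative only).
from typing import TypedDict


class Message(TypedDict):
    role: str      # "system", "user", or "assistant"
    content: str


def estimate_tokens(messages: list[Message]) -> int:
    """Very rough token estimate (~4 characters per token); an assumption, not the real tokenizer."""
    return sum(len(m["content"]) for m in messages) // 4


def truncate_history(messages: list[Message], context_threshold: int) -> list[Message]:
    """Once the accumulated history crosses the threshold, keep only the system
    prompt so the next request starts from the current prompt template alone."""
    if estimate_tokens(messages) <= context_threshold:
        return messages
    return [m for m in messages if m["role"] == "system"]


# Example: a tiny threshold forces a reset down to the system prompt.
history: list[Message] = [
    {"role": "system", "content": "You are a Home Assistant voice assistant."},
    {"role": "user", "content": "Turn on the kitchen lights."},
    {"role": "assistant", "content": "Done, the kitchen lights are on."},
]
history = truncate_history(history, context_threshold=2)
print(history)  # only the system message remains
```

Under this model, earlier turns stay in the request until the threshold is crossed, which would explain why stale facts from an old prompt template can resurface until enough new activations push the history over the limit.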
