
Interrupt kalliope during a neuron #426

Open
ghost opened this issue Mar 28, 2018 · 15 comments

Comments

@ghost

ghost commented Mar 28, 2018

Hi,
I need help knowing how I can interrupt Kalliope while a neuron is working. For example: when Kalliope searches for something on Wikipedia, sometimes a big text comes back and I don't want to hear all of it.

What can I do?

@LaMonF
Member

LaMonF commented Mar 29, 2018

Hi @LePudim,
You cannot (yet?) interrupt Kalliope while it is talking...
But to avoid such a situation with the Wikipedia neuron, you can use the "sentences" parameter, which limits the result to a given number of sentences from Wikipedia.

e.g.:

- name: "wikipedia-search"
  signals:
    - order: "look on wikipedia {{ query }}"
  neurons:
    - wikipedia_searcher:
        language: "en"
        sentences: 5

@ghost
Author

ghost commented Mar 29, 2018

Oh, thanks @LaMonF.
I will do that!

@Sispheor
Member

Could be a nice feature. I don't know yet how we would implement it, though.

The problem is that if we keep Kalliope listening for a new order while she is speaking, we risk catching false positives.

@ghost
Author

ghost commented Mar 30, 2018

This feature could work through a stop command only; that is, during speech Kalliope would respond to just that one command (all others would be ignored / would not work).
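A minimal sketch of what that filtering could look like (illustrative only, not Kalliope's actual code; the function and the "stop" command are assumptions): while speaking, drop every captured order except the dedicated stop command.

```python
# Hypothetical order filter: all names here are assumptions,
# not part of Kalliope's code base.

STOP_ORDER = "stop"  # the one command still honoured while speaking

def accept_order(order: str, is_speaking: bool) -> bool:
    """Decide whether a captured order should be processed."""
    if not is_speaking:
        return True  # normal operation: every order goes through
    # while Kalliope is talking, only the stop command gets through
    return order.strip().lower() == STOP_ORDER

print(accept_order("what time is it", is_speaking=True))  # ignored
print(accept_order("stop", is_speaking=True))             # accepted
```

The core would then route only accepted orders into the processing pipeline; everything else caught while the TTS plays is discarded, which limits the false-positive risk to a single phrase.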

@Sispheor
Member

Sispheor commented Apr 4, 2018

@LaMonF do we keep this open?
We would also need to keep Snowboy alive to handle this feature, and as you know, it is very greedy in resources.

@LaMonF
Member

LaMonF commented Apr 4, 2018

It is not the first time someone from the community has asked for this feature.
It might be worth giving it a try...
But as @Sispheor mentioned, from our previous experience it seems complicated to make it work properly without false positives and delays.
I think we can keep it open, then, and evaluate/benchmark.

@bacardi55
Contributor

I do believe it is a great feature to have…
Maybe we could add a setting to enable it or not, and based on that, kill the Snowboy process or keep it alive… I know it might require a lot of code adaptation (or not, I don't know this part of Kalliope's code well enough), but a way to stop at any time would be great.
(I'm using a physical button for this atm, but that's a hacky way :P)

@corus87
Contributor

corus87 commented May 13, 2018

How about using a second Snowboy trigger, which is more or less independent from the Kalliope core (maybe with the help of the REST API)? There we could use a unique trigger to stop the current work and start the on_waiting_for_trigger hook again.
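As a rough sketch of that idea (the API address, the synapse name, and the existence of a stop synapse are all assumptions; check the REST API docs for the exact endpoint), the independent trigger's hotword callback would only need to fire one HTTP request at the core:

```python
from urllib import request

KALLIOPE_API = "http://localhost:5000"  # assumed default REST API address
STOP_SYNAPSE = "stop-talking"           # hypothetical synapse that stops the current work

def build_stop_request(api_base: str, synapse: str) -> request.Request:
    """Build the POST that asks the core to run the stop synapse."""
    url = "%s/synapses/start/id/%s" % (api_base, synapse)
    return request.Request(url, data=b"", method="POST")

# The second trigger's hotword-detected callback would then just send it:
#   request.urlopen(build_stop_request(KALLIOPE_API, STOP_SYNAPSE))
req = build_stop_request(KALLIOPE_API, STOP_SYNAPSE)
print(req.full_url)
```

Because this detector lives outside the core process, it would not depend on the main Snowboy instance being alive while a neuron runs.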

@Sispheor
Member

The Snowboy process is killed after matching a hotword because it costs too many resources on an RPi. We cannot keep it alive.
And in most cases the speaker is close to the mic, so Kalliope could stop herself by picking up her own answer.
Even Google Home and Alexa do not seem to offer this feature yet.

With some code refactoring we could have a neuron that stops the TTS process. The user could then trigger it through the signal of their choice (button, MQTT, Snowboy (not yet coded)).
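If such a neuron existed, say a hypothetical kill_talk neuron (the neuron and its name are assumptions, nothing like it is coded yet), wiring it to the user's signal of choice would just be a brain entry, e.g. with an MQTT signal:

```yml
- name: "stop-talking"
  signals:
    - mqtt_subscriber:          # could equally be a button GPIO or any other signal
        broker_ip: "127.0.0.1"
        topic: "kalliope/stop"
  neurons:
    - kill_talk                 # hypothetical neuron that stops the TTS process
```

The point of the refactoring is exactly this decoupling: the stop logic lives in one neuron, and any signal can fire it.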

@corus87
Contributor

corus87 commented Sep 6, 2018

I'm not sure the Snowboy process takes that many resources on a Pi.
The Snowboy docs say it is light-weight and embedded: it runs on Raspberry Pis and consumes less than 10% CPU on the weakest Pi (single-core 700 MHz ARMv6).

There was also an issue where Snowboy took 50% CPU usage, which did not seem normal according to a user; even 25% on a Pi 1 would be too high.
And in the past we had a CPU load of 120%, which was much too high, so maybe our issue was somewhere else.

Maybe we can let the TTS process run in the background, like @Sispheor does in the Ambient_sound neuron, where it can be stopped at any time. I don't think we need to keep anything else running except the TTS process; this way we can always interrupt the TTS.
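A sketch of that "stoppable background TTS" idea in plain Python (not Kalliope's actual player code; 'aplay' is just a stand-in for whatever plays the synthesized file):

```python
import subprocess
import threading

class InterruptibleSpeaker:
    """Play synthesized audio in a child process so it can be killed
    at any time, and never run two utterances at once."""

    def __init__(self):
        self._proc = None
        self._lock = threading.Lock()

    def say(self, audio_file, player=("aplay",)):
        """Start playing audio_file, stopping any utterance in progress."""
        with self._lock:
            self._stop_locked()  # only one TTS instance at a time
            self._proc = subprocess.Popen(list(player) + [audio_file])

    def stop(self):
        """Interrupt the current utterance, if any."""
        with self._lock:
            self._stop_locked()

    def _stop_locked(self):
        if self._proc is not None and self._proc.poll() is None:
            self._proc.terminate()
            self._proc.wait()
        self._proc = None
```

A stop neuron (or any signal handler) would then only need to call speaker.stop(); starting a new utterance implicitly kills the previous one, which also covers the "two TTS instances" concern below.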

You can interrupt Alexa with her wake word, and Mycroft also has a skill to interrupt speech.

Btw, I think we should update Snowboy to the latest version, because there are now some universal models, like Alexa or Jarvis, but for those we need the ApplyFrontend option.

@Sispheor
Member

Sispheor commented Sep 6, 2018

You cannot keep the STT (you wrote TTS, but I suppose you are talking about the STT) process alive, because it needs a start and a stop to know when to analyse what you said.
Snowboy permanently analyses a small audio window.
Also, it's not only Snowboy that consumes resources but the whole project. All the components together consume a little each, so we had to kill the Snowboy process between orders.

I once made a POC with Snowboy active in the background, and it produced a lot of false positives, mostly when the microphone is next to the speakers.

@corus87
Contributor

corus87 commented Sep 6, 2018

I'm talking about the TTS: if we let it run in the background, then while Kalliope is speaking we can call the trigger again and, maybe with a core neuron, stop/kill the TTS. And to avoid two TTS instances running at once, we just stop the first one before starting the new one.

Also, I don't have any problems with false positives. My microphone is only a few centimeters away from my television and speakers, and even when I'm watching a movie with surround sound it's pretty rare that it activates. I think that's just a matter of tuning the Snowboy model, like playing with the sensitivity, etc.

@Sispheor
Member

Sispheor commented Sep 6, 2018

Yes, the TTS could be placed in a thread, but that is not this issue.
Give it a try if you want: on your RPi, just keep the Snowboy process alive and compare resource consumption.

@corus87
Contributor

corus87 commented Sep 6, 2018

I think it is exactly the issue here: the OP asks what he can do when he doesn't want to hear all of the text from Wikipedia, for example. That text comes out of the TTS, so in my opinion it is exactly this issue. By letting the TTS run in a thread and giving a neuron the ability to stop that thread, we would be able to interrupt it.

@Sispheor
Member

Sispheor commented Sep 6, 2018

The current design doesn't allow that. The LIFO for processing orders is unique; you cannot have two executions at the same time.
