Reduce gnss_syncro_monitor CPU load #662

Open · wants to merge 1 commit into base: next
Conversation

@vladisslav2011 (Contributor) commented Sep 6, 2022

For some unknown reason gnss_synchro_monitor stays in a tight loop trying
to catch all synchros and sometimes consumes a whole CPU core.
Triggering the gnss_synchro_monitor from the sample counter, as is done
with observables, greatly reduces CPU usage from ~60-90% to 1-2%.
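For illustration, a minimal sketch of the connection pattern this describes, using generic GNU Radio calls; the helper, block names, and port layout are assumptions, not the exact gnss_flowgraph.cc code:

```cpp
#include <gnuradio/top_block.h>

#include <vector>

// Hypothetical helper: wire N channel outputs to monitor inputs 0..N-1 and
// let the sample counter strobe an extra N-th input, so the scheduler wakes
// the monitor at the counter's output rate instead of letting it spin.
void connect_monitor(const gr::top_block_sptr& top_block,
    const std::vector<gr::basic_block_sptr>& channels,
    const gr::basic_block_sptr& monitor,
    const gr::basic_block_sptr& sample_counter)
{
    const int n_channels = static_cast<int>(channels.size());
    for (int i = 0; i < n_channels; i++)
        {
            top_block->connect(channels[i], 0, monitor, i);
        }
    // The extra input added by this change: the time strobe.
    top_block->connect(sample_counter, 0, monitor, n_channels);
}
```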

CPU usage before: [screenshot: gnss-syncro-monitor-bug]
CPU usage after: [screenshot: gnss-synchro-monitor-fixed]

Fixes #661

@jwmelto (Contributor) left a comment

I can believe that this improves performance (your evidence is compelling) but I'm not sure this is a quality fix. It may be; the case just hasn't been made.

Review threads (resolved):
src/core/monitor/gnss_synchro_monitor.cc
src/core/monitor/gnss_synchro_monitor.cc
src/core/receiver/gnss_flowgraph.cc
@jwmelto (Contributor) commented Oct 28, 2022

I still don't like the brittleness of this approach, but I'm not sure there is a better one. I see you had to increment the channel count passed to the ctor of the Acq monitor; the same change is (most likely) needed for the Tracking monitor.

Arguably, this could be moved into the ctor, so callers don't have to remember/know about this "extra" port (this is part of the brittleness). Or, you could just fix the tracking monitor as well.
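For what it's worth, a hedged sketch of that ctor-side fix, assuming the monitor consumes Gnss_Synchro items on every port, including the strobe; the member name and exact parameter list are guesses:

```cpp
// Sketch only: take the logical channel count and account for the extra
// strobe port inside the ctor, so callers never see the "+ 1".
gnss_synchro_monitor::gnss_synchro_monitor(int n_channels)
    : gr::block("gnss_synchro_monitor",
          // n_channels synchro inputs plus one sample-counter strobe input
          gr::io_signature::make(n_channels + 1, n_channels + 1, sizeof(Gnss_Synchro)),
          gr::io_signature::make(0, 0, 0)),
      d_nchannels(n_channels)
{
}
```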

@vladisslav2011 (Contributor, Author)

Thanks.
I've updated tracking monitor creation/connection.

I see at least 2 different solutions to this problem:

  1. Pass synchros to the monitor as messages
  2. Modify gnss_synchro_monitor::general_work to sleep for some time if all input buffers are empty (see the sketch after this list)
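A rough sketch of what option 2 could look like, assuming the standard GNU Radio general_work signature; the 1 ms back-off is an arbitrary illustrative value:

```cpp
#include <algorithm>
#include <chrono>
#include <thread>

int gnss_synchro_monitor::general_work(int noutput_items,
    gr_vector_int& ninput_items,
    gr_vector_const_void_star& input_items,
    gr_vector_void_star& output_items)
{
    // If the scheduler woke us with nothing to read on any input, back off
    // briefly instead of returning immediately and spinning.
    if (std::all_of(ninput_items.begin(), ninput_items.end(),
            [](int n) { return n == 0; }))
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            return 0;
        }
    // ... existing per-channel processing and consume() calls go here ...
    return 0;
}
```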

@jwmelto (Contributor) commented Nov 3, 2022

> Thanks. I've updated tracking monitor creation/connection.
>
> I see at least 2 different solutions to this problem:
>
> 1. Pass synchros to the monitor as messages

This would be such a fundamental change to the architecture that it's unlikely to be accepted.

> 2. Modify gnss_synchro_monitor::general_work to sleep for some time if all input buffers are empty

The default scheduler behavior should be to sleep for 250ms; however, I suspect that forecast is subverting this.
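If that suspicion is right, the culprit would be something like the following (a guess at the mechanism, not a verbatim copy of gnss_synchro_monitor.cc): a forecast that demands zero items tells the scheduler the block is always runnable, so general_work gets called in a tight loop even when every buffer is empty.

```cpp
void gnss_synchro_monitor::forecast(int noutput_items,
    gr_vector_int& ninput_items_required)
{
    // Requiring zero items on every input marks the block as always ready,
    // defeating the scheduler's normal wait-for-input back-off.
    for (auto& nreq : ninput_items_required)
        {
            nreq = 0;
        }
}
```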

I don't have a problem with the time strobe being an input, but the (N+1)th port is a maintenance burden. I don't know if there is a way to make a different "class" of input, to make it clear that it's different, or, worst case, move it to the 0th input. It still creates a brittle reliance on external users to configure it correctly, and it's too easy to get wrong.
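One possible shape for that mitigation, reusing the hypothetical names from the earlier sketch (not the PR's actual port layout): pin the strobe at a fixed, documented index 0 so channel i always maps to input i + 1, and keep the magic constant in one place.

```cpp
// Hypothetical fixed strobe index instead of a trailing N-th port.
static constexpr int STROBE_PORT = 0;

top_block->connect(sample_counter, 0, monitor, STROBE_PORT);
for (int i = 0; i < n_channels; i++)
    {
        top_block->connect(channels[i], 0, monitor, i + 1);
    }
```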

@jwmelto (Contributor) commented Jan 6, 2023

@carlesfernandez can this be merged?

@wkazubski (Contributor) commented Jan 13, 2024

> @carlesfernandez can this be merged?

I am using this patch. It reduces CPU load significantly. Without the patch I had load bursts to 100% when monitoring is enabled; after the patch is applied, the CPU load is a stable 25-30% while receiving the GPS L1 signal with an rtl-sdr dongle.
