
5ms MAX_IDLE_WAIT in xi_rpc::next_read leads to unwanted CPU utilization #1320

Open
kennylevinsen opened this issue Dec 29, 2021 · 3 comments

@kennylevinsen

xi_rpc::next_read currently implements naïve busy-polling to service multiple task sources in a non-blocking manner: it attempts to read from the peer with a very short timeout, and when that timeout expires, it services other "background tasks" before polling again.
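To illustrate the shape of the problem (this is a hypothetical stand-in using std::sync::mpsc, not xi_rpc's actual code): a receive with a 5 ms timeout wakes the thread on every timeout, even when the peer is completely idle.

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError};
use std::thread;
use std::time::Duration;

/// Busy-poll sketch: try to receive a message with a 5 ms timeout;
/// on timeout, run background work and poll again. Even with nothing
/// to do, this wakes the thread ~200 times per second.
fn next_read<T>(rx: &Receiver<T>, mut idle_work: impl FnMut()) -> T {
    loop {
        match rx.recv_timeout(Duration::from_millis(5)) {
            Ok(msg) => return msg,                         // a real message arrived
            Err(RecvTimeoutError::Timeout) => idle_work(), // wakes every 5 ms while idle
            Err(RecvTimeoutError::Disconnected) => panic!("peer hung up"),
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        // Simulate a peer that stays quiet for a while.
        thread::sleep(Duration::from_millis(20));
        tx.send("real message").unwrap();
    });
    let mut wakeups = 0u32;
    let msg = next_read(&rx, || wakeups += 1);
    println!("got {msg:?} after {wakeups} idle wakeups");
}
```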

A more appropriate implementation would either block indefinitely in a proper poll across all sources, or use something along the lines of an mpsc select! across multiple channels.
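One way to get fully blocking behavior with the standard library alone (std's mpsc has no select!, so this is a hedged sketch, not what xi_rpc would necessarily do): give every source a clone of the same Sender so all events are merged into one channel, and let the consumer block on a single recv(), burning zero CPU while idle. The Event type and spawn_sources are hypothetical names for illustration.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

/// Hypothetical event type merging all task sources into one stream.
#[derive(Debug)]
enum Event {
    Peer(String),
    Background(u32),
}

/// Each source thread gets a clone of the same Sender, so the consumer
/// can block indefinitely on a single recv() instead of busy-polling.
fn spawn_sources(tx: Sender<Event>) {
    let peer_tx = tx.clone();
    thread::spawn(move || {
        peer_tx.send(Event::Peer("hello".into())).unwrap();
    });
    thread::spawn(move || {
        tx.send(Event::Background(7)).unwrap();
    });
}

fn main() {
    let (tx, rx) = channel();
    spawn_sources(tx);
    // One blocking recv per event; the thread sleeps while nothing arrives.
    for _ in 0..2 {
        match rx.recv().unwrap() {
            Event::Peer(s) => println!("peer: {s}"),
            Event::Background(n) => println!("background: {n}"),
        }
    }
}
```

A crate such as crossbeam-channel would instead allow a select! over the original separate channels, avoiding the merge step.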

On an ultrabook idling at 800 MHz, a busy-poll with a 5 ms timeout wakes the thread up to ~200 times per second and leads to ~1% reported CPU utilization, which is a lot for doing nothing.

@cmyr
Member

cmyr commented Jan 3, 2022

Xi is not under active development, so I wouldn't expect this to be fixed in the near future, but I agree that this is an inelegant aspect of the design.

@kennylevinsen
Author

Indeed. In this case it was discovered due to the use of Xi components in lapce.

@cmyr
Member

cmyr commented Jan 4, 2022

Ah, gotcha. I do need to look into that project in more detail; I have aspirations for a better text-editing component in druid.
