
Implement RPM methodology (draft-02) #562

Open · FelixGaudin wants to merge 2 commits into master
Conversation

@FelixGaudin

Implementation of this issue, which adds the RPM methodology.

The goal of this methodology is to measure responsiveness under working conditions.
It also measures throughput while the loaded-latency test is running.

If you have any questions about the methodology, please let me know. You can also join the Slack via this link: https://join.slack.com/t/teamrpm/shared_invite/zt-1nt7d2yj8-bPnO8d3xwjkA7pbYRGq_Lw

@adolfintel (Member)

This looks very useful and well written.

I honestly don't have time at the moment to test this thoroughly, write the documentation, and update the backend, but I don't want this to go to waste. I'll contact you soon.

@peacepenguin

This works really well, thanks so much @FelixGaudin for putting this together. I'm already using your fork in production to test for bufferbloat on my internal networks, in addition to the public bufferbloat test I also use regularly: https://www.waveform.com/tools/bufferbloat

The measurement units are a little hard for me to understand, though. I can't quite grasp what RPM represents or what a "good" RPM value is.

Then the "Factor of latency increase" i think means this: assume a non-loaded baseline ping latency of 10ms. Then a measured Download factor of latency increase of 2.0. I think means: while under the download test, the latency increased to 20ms.

The public Waveform bufferbloat test, for example, just lists a "download latency +" value; the closer this is to 0 under load, the better.

I'll take a look at the code in a fork and see if I can add the raw "added latency" values for upload/download. I assume they're already being collected and used by your new code.

I also want to add storage of this new latency data to the database and include it in the generated result image for sharing.

Functionality-wise this seems to work great, though! Thanks!

@FelixGaudin (Author)

RPM is the number of round trips per minute. The metric follows the idea "bigger is better": a poor value is something like 300 RPM (200 ms), while a good value is high, like 2400 RPM (25 ms).
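
As a rough sketch (not code from this PR; `rttToRpm` and `rpmToRtt` are hypothetical helper names), the conversion is just a division by the number of milliseconds in a minute:

```ts
// Convert a measured round-trip time (in milliseconds) to RPM
// (round trips per minute) and back. Illustrative helpers only,
// not part of this PR.
const MS_PER_MINUTE = 60_000;

function rttToRpm(rttMs: number): number {
  return MS_PER_MINUTE / rttMs;
}

function rpmToRtt(rpm: number): number {
  return MS_PER_MINUTE / rpm;
}

console.log(rttToRpm(200)); // 300  -> poor responsiveness
console.log(rttToRpm(25));  // 2400 -> good responsiveness
```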

The idea behind the factor of increase is to have a simpler metric. When you tell non-IT people that their link increases the latency by 50 ms under load, it's not easy for them to understand; but if you say that the time taken is 5 times longer, that's more understandable.

Also, the same absolute difference can mean very different things depending on the unloaded value. For example, going from 5 ms (unloaded) to 25 ms (loaded) is a difference of 20 ms, and going from 200 ms (unloaded) to 220 ms (loaded) is also a difference of 20 ms, but the impact is not the same.
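
Here is a small sketch of the two ways of presenting the same measurements (the helper names are just for illustration, not taken from the PR):

```ts
// Two ways of reporting loaded-latency results: the absolute
// "added latency" and the "factor of latency increase".
// Illustrative only; these helpers are not part of the PR.
function addedLatency(unloadedMs: number, loadedMs: number): number {
  return loadedMs - unloadedMs;
}

function latencyIncreaseFactor(unloadedMs: number, loadedMs: number): number {
  return loadedMs / unloadedMs;
}

// The same 20 ms of added latency, but a very different relative impact:
console.log(addedLatency(5, 25), latencyIncreaseFactor(5, 25));       // 20 5
console.log(addedLatency(200, 220), latencyIncreaseFactor(200, 220)); // 20 1.1
```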

So I think using a factor of latency increase can be better, but maybe there is a better way to name it or to present it.

Anyway, thank you very much for your interest in this feature!
