
Performance differences between AppArmor Enforcer and BPF Enforcer #1639

Open
dejavudwh opened this issue Feb 17, 2024 · 8 comments
Labels
enhancement New feature or request

Comments

@dejavudwh

I have observed the performance differences between AppArmor Enforcer and BPF Enforcer here. The experiments indicate that the performance of BPF Enforcer is slightly worse. I am very curious about the reasons behind this. Could you please explain it to me?

@dejavudwh dejavudwh added the enhancement New feature or request label Feb 17, 2024
@daemon1024
Member

daemon1024 commented Feb 19, 2024

Hey @dejavudwh, we have been working on improving the performance of the BPF Enforcer. The numbers are much different now, and much more in favour of BPF-LSM.

The updated benchmarking guide is at https://github.com/kubearmor/KubeArmor/wiki/Kubearmor-Performance-Benchmarking-Guide

And we are soon planning to publish the numbers as well. Thanks 🙌🏽

@dejavudwh
Author

@daemon1024 Thank you so much for your response. I was wondering if it might be possible to get some preliminary figures? Specifically, I'm curious about the percentage of performance consumed when using the BPF Enforcer.

@daemon1024
Member

@dejavudwh It's anywhere between 1% and 8%, depending on the kinds of policies and workloads:

- Process-only policies have the least overhead.
- Process and network policies have moderate overhead.
- File rules involve the most overhead.

If we compare it to AppArmor, the percentage is anywhere between 3% and 12%, because with AppArmor we have the additional overhead of enabling visibility in the System Monitor, which is not a requirement with BPF-LSM since it can generate alerts on its own.

Hope that helps 🙌🏽
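
For context, here is a minimal sketch of what the lowest-overhead case (a process-only policy) looks like. The policy name, namespace, label, and binary path below are placeholders, not taken from any benchmark:

```yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-shell        # hypothetical policy name
  namespace: default       # placeholder namespace
spec:
  selector:
    matchLabels:
      app: my-workload     # placeholder workload label
  process:
    matchPaths:
      - path: /bin/sh      # block execution of this binary
  action: Block
```

Policies that add `file` or `network` rule sections to the `spec` fall into the higher-overhead categories discussed above.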

@dejavudwh
Author

@daemon1024 Thank you very much, this is very helpful for us!

@daemon1024
Member

@dejavudwh Anytime 😁
Anything else we can help with?
If something is not clear, feel free to join us on our Slack.

Also, it would be great to know: how are you using KubeArmor? We maintain an ADOPTERS.md and would love to hear about the use-cases.

@dejavudwh
Author

@daemon1024
Thank you very much. I'm personally very interested in KubeArmor, but our lab hasn't really used KubeArmor in a production environment yet, and is still doing extensive research on related projects.

@dejavudwh
Author


@daemon1024 Thank you very much for your response. I would like to inquire further: are these test results derived from actual operational environments, such as stress testing commonly used software like MySQL and Nginx and comparing their QPS? Or are the results from specific microbenchmarks, like those in osbench, involving continuous file read-write operations and continuous process creation? I truly appreciate your guidance.

@daemon1024
Member

@dejavudwh Here's the final Benchmarking datasheet

https://github.com/kubearmor/KubeArmor/wiki/KubeArmor-Performance-Benchmarking-Data

The benchmarking is based on an actual operational environment: https://github.com/GoogleCloudPlatform/microservices-demo/

It involves scaled applications, and performance differences are measured in terms of throughput.

We did not follow a specific operation route, because then the results could be skewed and not reflective of actual environments.

You can check our benchmarking process at https://github.com/kubearmor/KubeArmor/wiki/Kubearmor-Performance-Benchmarking-Guide

Would love to have feedback on the process and anything we can improve to better reflect actual numbers.

> I'm personally very interested in KubeArmor, but our lab hasn't really used KubeArmor in a production environment yet, and is still doing extensive research on related projects.

Glad to know that, and I appreciate your interest. Happy to help out with your lab's research if you need anything specific.
