[feature] Automatically set GOMAXPROCS to match Linux container CPU quota in runtime #24
Comments
I benchmarked the storage snapshot CRUD APIs with Docker allocated 2 CPU cores. The results for GOMAXPROCS=2 and GOMAXPROCS=4 were close, but both were clearly better than GOMAXPROCS=1. /cc @supereagle
Was that measured on your local machine? I'd suggest trying it on a server. The problem may be worse on servers with more cores, because GOMAXPROCS will be larger.
Our servers all have 4 cores, same as my local machine. Running it on a server is a hassle; I'd have to copy the code over.
The impact is indeed small here: 2 and 4 both beat 1. With Docker given 2 CPUs, setting GOMAXPROCS to 1 wastes one CPU.
The servers are 4-core? Oh, never mind then...
A more likely scenario is this: say a Node has 4 CPUs and one of its Pods sets a CPU hard limit of 1. GOMAXPROCS will be set to 4, so the Go runtime creates 4 Ps, but only one of them can actually run at a time. The 4 Ms then contend for the Ps, which hurts CPU-bound workloads badly. In my own test, a prime-computation task with GOMAXPROCS=4 ran twice as slow as with GOMAXPROCS=1, the theoretically optimal setting.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
There will be a huge overhead for CPU-bound tasks if GOMAXPROCS does not match the CPU hard limit in Kubernetes. Maybe we can use https://github.com/uber-go/automaxprocs to configure it automatically.
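automaxprocs is designed to be used via a blank import: its `init()` adjusts GOMAXPROCS to the container's CPU quota when the process starts, so no other code is required. A minimal usage sketch:

```go
package main

import (
	"fmt"
	"runtime"

	// Imported only for its side effect: on startup it reads the cgroup
	// CPU quota and calls runtime.GOMAXPROCS accordingly.
	_ "go.uber.org/automaxprocs"
)

func main() {
	fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
}
```

Outside a container (or with no quota set), the library leaves the default value in place.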
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?: