[BUG] Coscheduling Timeout Cannot exceed 15m #1919
Comments
I look forward to your reply, and I would be happy to work with you to solve this problem.
Can you exec
In your example, PodGroup A configures scheduleTimeoutSeconds as 10, so in theory PodGroup A will time out after 10 seconds. However, in our current implementation, the timeout configuration of a PodGroup only means the maximum wait time since the first pod reaches the permit stage; it is not persisted as PodGroup/Pod status in the apiserver, and it does not block the pod scheduling process. Could you give me more detail about why the pod is unschedulable?
Sorry, there is an error in the YAML I provided; I will fix it later and provide more information.
/cc @ZiMengSheng
Can you give me the scheduler log showing why pod-example1's coscheduling PreFilter failed? The current PreFilter failure message is a little confusing due to a known kube-scheduler bug.
I ran a test and got the point. PodGroup default/a has a total number of 10 and a min number of 1.
In your example, pod-example1 gets rejected due to a timeout while waiting for PodGroup B. The scheduling cycle of pod-example1 is incremented to 1 after PreFilter. When pod-example1 enters a scheduling cycle the next time, the gang scheduling cycle won't be incremented, because the number of children whose schedule cycle equals the gang scheduling cycle is one, which is less than totalChildrenNumber; thus PreFilter fails. A new schedule cycle will never arrive until you submit enough children of PodGroup A. So could you just submit all the children?
@ZiMengSheng In group/a I specified minMember as 1. If I still need to submit more Pods, that is not consistent with my expectation.
OK, your opinion is right and welcome. There are some inconsistencies in the design. We need to fix them in the code and the design doc. Do you have the time and interest to fix it?
I'd love to fix it, but I don't have a specific idea of how best to fix it. We also want to hear from the community.
Welcome to contribute! Just do it!
@ls-2018 any updates? ;)
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
1.yaml
2.yaml
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):