[PodGroup update phase Error] PodGroup cannot update phase from Scheduling to Scheduled when gangMembers larger than two
What happened:
The PodGroup cannot update its phase from Scheduling to Scheduled when the number of gangMembers is greater than two, regardless of whether all gangMembers have been successfully scheduled and are running. The status gets stuck, for example:
```yaml
....
status:
  phase: Scheduling
  running: 4
  scheduleStartTime: "2024-05-06T13:59:43Z"
  scheduled: 3
```
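For anyone trying to observe the same behavior, the stuck status above can be watched live while the job's pods start (the PodGroup name below is a placeholder, not taken from the report):

```sh
kubectl get podgroup <podgroup-name> -o yaml -w
```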
What you expected to happen:
When all gangMembers have been successfully scheduled and are running, the PodGroup should update its phase from Scheduling to Scheduled.
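For comparison, a sketch of the status one would expect in that case, with values extrapolated from the snippet above rather than taken from a real cluster:

```yaml
status:
  phase: Scheduled   # expected transition once all members are placed
  running: 4
  scheduled: 4       # should catch up to the running count
```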
How to reproduce it (as minimally and precisely as possible):
Enable gang scheduling and create a job with more than two pods. The more pods the job has, the greater the probability of hitting the issue. A sketch of a reproduction manifest follows below.
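As a concrete starting point, a minimal gang-scheduled Volcano Job along the following lines should trigger it. This is a sketch based on the standard Volcano Job examples; the name, image, and replica count are placeholders, not values from the report:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: gang-repro              # placeholder name
spec:
  schedulerName: volcano
  minAvailable: 4               # gang size: all 4 pods must be schedulable together
  tasks:
    - replicas: 4
      name: worker
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: main
              image: busybox    # placeholder image
              command: ["sleep", "3600"]
```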
Anything else we need to know?:
Occurs in version: 1.3.0
Environment:
Kubernetes version (kubectl version): 1.22.3