What happened:
I created a job that has just one task and one pod. After the job, podgroup, and pod reach the Running phase, I update the pod and then delete the job almost at the same time. This can cause the job to be deleted from etcd while it is still present in the volcano controller's cache. As a result, the Kubernetes garbage collector deletes the pod because its owner has been deleted, while the volcano controller re-creates the pod because the job is still in its cache.
The observable symptom is that pods are repeatedly created and deleted.
What you expected to happen:
The volcano controller should remove the job from its cache completely once the job has been deleted, and should not re-add it to the cache while handling the pod update.
How to reproduce it (as minimally and precisely as possible):
Update the pod and delete the job as close together in time as possible.
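The race described above can be sketched with a toy cache. This is a minimal illustration, not volcano's actual implementation: the names `jobCache`, `AddJob`, `DeleteJob`, and `UpdatePod` are hypothetical. The point is that once the delete handler removes a job key, a later pod-update event must only consult the cache, never re-insert the owner.

```go
package main

import (
	"fmt"
	"sync"
)

// jobCache is a minimal stand-in for the controller's job cache.
type jobCache struct {
	mu   sync.Mutex
	jobs map[string]bool // job key -> present
}

func newJobCache() *jobCache {
	return &jobCache{jobs: make(map[string]bool)}
}

// AddJob is called by the job add handler.
func (c *jobCache) AddJob(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.jobs[key] = true
}

// DeleteJob is called by the job delete handler; it removes the
// entry completely so that no later event can resurrect it.
func (c *jobCache) DeleteJob(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.jobs, key)
}

// UpdatePod is called by the pod update handler. The buggy behavior
// was, in effect, to re-add the owning job to the cache; the fix is
// to only look up jobs that are still present.
func (c *jobCache) UpdatePod(ownerKey string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.jobs[ownerKey] // read-only: never re-insert a deleted job
}

func main() {
	c := newJobCache()
	c.AddJob("default/job1")
	c.DeleteJob("default/job1")
	// A pod update arriving after the job delete must be a no-op.
	fmt.Println(c.UpdatePod("default/job1")) // prints "false"
}
```

If the pod-update path instead wrote `c.jobs[ownerKey] = true`, the deleted job would reappear in the cache and the controller would keep re-creating pods, which is the create/delete loop reported here.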
Anything else we need to know?:
controller parameters:
Environment:
- Kubernetes version (`kubectl version`): v1.20
- OS (`uname -a`): Linux 5.10.0-103-bili-colo