Updated Kubernetes pod operator to find pods only in running state #39702
base: main
Conversation
Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
@dirrao, please help review the above changes.
Is that right?
@eladkal, when a node is drained in EKS due to an AWS EC2 spot instance interruption or for some other reason, controllers usually detect that the pods have been terminated and schedule new pods on other available nodes to maintain the desired number of replicas. In our case, however, Airflow deploys pods without a controller, so pods sometimes get stuck in the Terminating state for a while until Kubernetes reconciles and terminates them after the node is gone. The issue I'm trying to address is this case, where pods are stuck in the Terminating state, not the completed ones; the find_pod method is used to get pods based on labels and is later used in the cleanup method.
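For illustration, a minimal sketch (not this PR's code) of how such pods could be detected with the kubernetes Python client. A pod stuck in Terminating typically still reports phase Running but has metadata.deletion_timestamp set; the namespace and label selector below are placeholders:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

# Placeholder namespace and labels; Airflow matches pods by task labels.
pod_list = v1.list_namespaced_pod(
    "airflow", label_selector="dag_id=my_dag,task_id=my_task"
).items

# Pods already marked for deletion are the ones stuck in Terminating.
terminating = [p for p in pod_list if p.metadata.deletion_timestamp is not None]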
@paramjeet01
We have retries with a retry_delay of 5 minutes. If the pod is still stuck in the Terminating state or similar, then the Kubernetes cluster is not working as expected. I believe we can't handle all these kinds of situations in Airflow; better to fix this issue in Kubernetes.
@jedcunningham / @hussein-awala WDYT?
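For reference, a minimal sketch of the task-level retry settings mentioned above; the task id, pod name, image, and import path are placeholders and assumptions (the module path varies across cncf.kubernetes provider versions), not taken from this PR:

from datetime import timedelta

from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

task = KubernetesPodOperator(
    task_id="my_task",                 # placeholder task id
    name="my-pod",                     # placeholder pod name
    image="busybox",
    cmds=["sh", "-c", "echo hello"],
    retries=3,                         # re-run the task on failure
    retry_delay=timedelta(minutes=5),  # the 5-minute delay mentioned above
)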
# get remote pod for use in cleanup methods
self.remote_pod = self.find_pod(self.pod.metadata.namespace, context=context)
Why do you need to move this block? The value of remote_pod is used in the previous block.
Moved remote_pod after await_pod_start, so that the find_pod method will properly choose the running pods; pending pods won't be picked up by find_pod. Pardon, I just now saw that we implemented callbacks; I'll move the block below.
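For context, a rough sketch of the two orderings under discussion; the method names come from this thread, and the exact calls and signatures are assumptions rather than verified operator code:

# Order under review: remote_pod is looked up right after pod creation,
# so a Pending pod (or one stuck in Terminating) can be matched.
self.pod = self.get_or_create_pod(pod_request_obj, context)
self.remote_pod = self.find_pod(self.pod.metadata.namespace, context=context)

# Alternative order: look the pod up only after it has started, so only
# Running pods are matched; the trade-off is that remote_pod is not yet
# set when pod-creation callbacks fire.
self.pod = self.get_or_create_pod(pod_request_obj, context)
self.await_pod_start(pod=self.pod)
self.remote_pod = self.find_pod(self.pod.metadata.namespace, context=context)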
@dirrao Agreed, we can check Kubernetes and solve this issue there, but we have an existing problem: pods are created on each retry and are not cleaned up properly when find_pod returns more than one pod.
@@ -532,11 +532,12 @@ def find_pod(self, namespace: str, context: Context, *, exclude_checked: bool =
        ).items

        pod = None
        num_pods = len(pod_list)
        running_pods = [pod for pod in pod_list if pod.status.phase == "Running"]
What about a pod that is in the Pending state? It will result in launching another application pod, which is not needed.
In the initial commit I had placed the find_pod block after await_pod_start, so we would have only running pods, but I removed that later because the recently introduced callbacks use remote_pod for on_pod_creation in execute_sync. Now I have added support for pending pods.
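A hedged sketch of what the resulting filter might look like (the PR's actual code may differ): keep Running and Pending pods, and drop pods already marked for deletion, i.e. those stuck in Terminating:

def filter_candidate_pods(pod_list):
    # Pods usable for reattach: Running or Pending phase, and not
    # already marked for deletion by the API server.
    allowed_phases = {"Running", "Pending"}
    return [
        pod
        for pod in pod_list
        if pod.status.phase in allowed_phases
        and pod.metadata.deletion_timestamp is None
    ]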
The proposed changes to the find_pod method will ensure it selects only pods that are in the running state, effectively ignoring pods that are in the terminating state. This adjustment addresses the issue of pods remaining in the terminating state for an extended period.
closes: #39096