
Updated Kubernetes pod operator to find pods only in running state #39702

Open
wants to merge 5 commits into
base: main

Conversation

paramjeet01

The proposed changes to the find_pod method will ensure it selects only pods that are in the running state, effectively ignoring pods that are in the terminating state. This adjustment addresses the issue of pods remaining in the terminating state for an extended period.
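In isolation, the selection rule being proposed looks roughly like the sketch below. This is a hypothetical standalone helper for illustration only; the actual change lives inside find_pod in the cncf.kubernetes provider, and only the phase check mirrors this PR's diff.

from kubernetes.client.models import V1Pod


def filter_running_pods(pod_list: list[V1Pod]) -> list[V1Pod]:
    """Keep only pods whose phase is Running (illustrative helper)."""
    # pod.status.phase is one of Pending, Running, Succeeded, Failed, Unknown;
    # pods that have finished, or whose phase no longer reports Running after a
    # node drain, are dropped from the candidate list.
    return [pod for pod in pod_list if pod.status.phase == "Running"]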

closes: #39096
related: #39096



boring-cyborg bot commented May 18, 2024

Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
Here are some useful points:

  • Pay attention to the quality of your code (ruff, mypy and type annotations). Our pre-commits will help you with that.
  • In case of a new feature, add useful documentation (in docstrings or in the docs/ directory). Adding a new operator? Check this short guide. Consider adding an example DAG that shows how users should use it.
  • Consider using the Breeze environment for testing locally. It's a heavy Docker setup, but it ships with a working Airflow and a lot of integrations.
  • Be patient and persistent. It might take some time to get a review or get the final approval from Committers.
  • Please follow the ASF Code of Conduct for all communication, including (but not limited to) comments on Pull Requests, the mailing list, and Slack.
  • Be sure to read the Airflow Coding style.
  • Always keep your Pull Requests rebased, otherwise your build might fail due to changes not related to your commits.
    Apache Airflow is a community-driven project and together we are making it better 🚀.
    In case of doubts contact the developers at:
    Mailing List: dev@airflow.apache.org
    Slack: https://s.apache.org/airflow-slack

@paramjeet01
Author

@dirrao, please help review the above changes.

@eladkal
Contributor

eladkal commented May 18, 2024

This adjustment addresses the issue of pods remaining in the terminating state for an extended period.

Is that right?
Completed is also a terminated state, is it not? Why is reattaching to a completed pod considered wrong?

@paramjeet01
Author

paramjeet01 commented May 18, 2024

@eladkal, when a node is drained in EKS, for example due to an AWS EC2 spot instance interruption, controllers usually detect that the pods have been terminated and schedule new pods on other available nodes to maintain the desired number of replicas. In our case, however, Airflow deploys pods without a controller, so pods can stay stuck in the terminating state for a while after the node is terminated, until Kubernetes reconciles and finally removes them.

The issue I'm trying to address is the above case where pods are stuck in the terminating state, not the completed ones. The find_pod method is used to fetch pods based on labels, and its result is later used in the cleanup method.
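For context, here is a minimal standalone sketch of the kind of label-based lookup find_pod performs and why a pod stuck terminating can still appear in its result. The namespace and labels are hypothetical, and it uses the plain kubernetes Python client rather than the provider's hook.

from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig; use load_incluster_config() in-cluster
v1 = client.CoreV1Api()

# Hypothetical namespace and task-instance labels, for illustration only.
pods = v1.list_namespaced_pod(
    namespace="airflow",
    label_selector="dag_id=my_dag,task_id=my_task",
).items

for pod in pods:
    # A pod that is being deleted keeps appearing in this list until the kubelet
    # on its (possibly already terminated) node confirms the deletion; while the
    # pod is stuck, metadata.deletion_timestamp is set on it.
    print(pod.metadata.name, pod.status.phase, pod.metadata.deletion_timestamp)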

Collaborator

@dirrao dirrao left a comment


@paramjeet01
We have retries with a retry_delay of 5 minutes. If the pod is still stuck in the terminating state, then the Kubernetes cluster is not working as expected. I believe we can't handle all of these kinds of situations in Airflow; it is better to fix this issue in Kubernetes.
@jedcunningham / @hussein-awala WDYT?

Comment on lines 606 to 608

# get remote pod for use in cleanup methods
self.remote_pod = self.find_pod(self.pod.metadata.namespace, context=context)
Member


Why do you need to move this block? The value of remote_pod is used in the previous block.

Author


I moved remote_pod after await_pod_start so that the find_pod method will properly choose the running pods, since pending pods won't be picked up by find_pod. Pardon, I just now saw that we implemented callbacks; I'll move the block below.

@paramjeet01
Author

@paramjeet01 We have retries with a retry_delay of 5 minutes. If the pod is still stuck in the terminating state, then the Kubernetes cluster is not working as expected. I believe we can't handle all of these kinds of situations in Airflow; it is better to fix this issue in Kubernetes. @jedcunningham / @hussein-awala WDYT?

@dirrao Agreed, we can look into Kubernetes and solve this issue there, but we have an existing problem: a new pod is created on each retry and is not cleaned up properly when find_pod returns more than one pod.

@@ -532,11 +532,12 @@ def find_pod(self, namespace: str, context: Context, *, exclude_checked: bool =
).items

pod = None
num_pods = len(pod_list)
running_pods = [pod for pod in pod_list if pod.status.phase == "Running"]
Collaborator


What about a pod that is in the pending state? This will result in launching another application pod, which is not needed.

Author


In the initial commit, I placed the find_pod block after await_pod_start so that we'd only have running pods, but I removed that later because the recently introduced callbacks use remote_pod for on_pod_creation in execute_sync. Now, I have added support for pending pods.
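As a rough illustration of the behaviour described above, here is a hypothetical helper (not the provider's actual find_pod code) that keeps both Running and Pending pods as reattach candidates and mirrors the single-pod expectation discussed in this thread.

from __future__ import annotations

from kubernetes.client.models import V1Pod


def select_reattachable_pod(pod_list: list[V1Pod]) -> V1Pod | None:
    """Pick the single Pending or Running pod to reattach to, if one exists."""
    candidates = [pod for pod in pod_list if pod.status.phase in ("Running", "Pending")]
    if len(candidates) > 1:
        # More than one live pod for the same task instance indicates leftover
        # pods that were never cleaned up, which is the situation described above.
        raise RuntimeError(f"More than one matching pod found: {len(candidates)}")
    return candidates[0] if candidates else None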

Labels
area:providers, provider:cncf-kubernetes (Kubernetes provider related issues)
Development

Successfully merging this pull request may close these issues.

Pods are still running even after receiving SIGTERM Terminating subprocesses
4 participants