kubectl get should have a way to filter for advanced pod status #49387
/kind feature
/sig cli
Same here. It seems unreasonable to need such complex syntax just to list non-running containers...
Ideally I would be able to say something like:
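(The concrete example was lost in extraction; a hypothetical invocation in the spirit of the request might look like the following. Note that --ready is not a real kubectl flag, purely an illustration of the wish.)

```
# Hypothetical, illustrative only -- kubectl has no --ready flag:
kubectl get pods --ready=false
```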
I had to make a small modification to …
#50140 provides a new flag, `--field-selector`, for filtering objects by selected fields.
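For example, filtering pods on one of the supported fields:

```
# List pods by phase using the new flag:
kubectl get pods --field-selector=status.phase=Running
```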
/close

@asarkar
@dixudx thanks for the PR for the field-selector. But I think this is not what I had in mind. I wanted to be able to figure out pods that have one or more containers that are not passing the readiness checks. Given that I have a non-ready pod (kubectl v1.9.1):
This pod is still in phase Running, so I can't get it using your proposed filter.
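(A sketch, not from the original comment, of why the phase-based filter misses this case:)

```
# The pod's phase stays "Running" even while READY shows 0/1,
# so a phase-based field selector cannot catch failing readiness:
kubectl get pods --field-selector=status.phase!=Running
# -> returns nothing for this pod
```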
/reopen

Got the same issue.
Hm, can I use it for getting nested array items? Like I want to do … But it returns an error; I tried …
@artemyarulin try …
Thanks, just tried with kubectl v1.9.3 / cluster v1.9.2 and it returns the same error.
Due to a test flake I discovered that the termination helper didn't work as expected, and status.phase is not represented at all (kubernetes/kubernetes#49387). Issue:

```
vagrant@k8s1:~$ kubectl delete pod testds-w7prl
pod "testds-w7prl" deleted
vagrant@k8s1:~$ kubectl get pods
NAME               READY   STATUS        RESTARTS   AGE
netcatds-bhxv4     1/1     Running       0          5m
netcatds-zpzzl     1/1     Running       0          5m
testclient-8qx59   1/1     Running       0          1m
testclient-r9xmm   1/1     Running       0          1m
testds-fwss5       1/1     Running       0          32s
testds-w7prl       0/1     Terminating   0          1m
vagrant@k8s1:~$ kubectl get pods -o "jsonpath='{.items[*].status.phase}'"
'Running Running Running Running^C
vagrant@k8s1:~$ kubectl get pods
NAME               READY   STATUS        RESTARTS   AGE
netcatds-bhxv4     1/1     Running       0          5m
netcatds-zpzzl     1/1     Running       0          5m
testclient-8qx59   1/1     Running       0          1m
testclient-r9xmm   1/1     Running       0          1m
testds-fwss5       1/1     Running       0          40s
testds-w7prl       0/1     Terminating   0          1m
vagrant@k8s1:~$
```

Signed-off-by: Eloy Coto <eloy.coto@gmail.com>
Sadly, the same thing happens on v1.9.4. What I'm trying to do here is to get all pods with a given parent UID...
Waiting anxiously for this feature •ᴗ•
This filter string is not supported. For pods, only "metadata.name", "metadata.namespace", "spec.nodeName", "spec.restartPolicy", "spec.schedulerName", "status.phase", "status.podIP" and "status.nominatedNodeName" are supported. @migueleliasweb If you want to filter out the pod in your case, you can use …
You can also use kubectl's JSONPath support.
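(A minimal example of that JSONPath support, printing each pod's name and phase; my own illustration, not the lost snippet from the comment:)

```
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```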
Thanks @dixudx. But let me understand a little bit better. If I'm running this query in a cluster with a few thousand pods: …
@migueleliasweb If … For …
This works great for me with …
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
In my case I have a rolling-update Deployment in which the new ReplicaSet is marked as …
However, I have randomly encountered that the …

But after one second: …

Then when I use the Go client method, I get the correct results: …

So as you noticed, the field selector will not work because both will always return … This behaviour is causing confusion. I could set a fixed delay of 3 seconds, but that is a very bad solution. Further inspection suggests that during the termination grace period …
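(Not from the original thread, but two stock kubectl commands that sidestep this readiness race; my-pod and my-deployment are placeholders:)

```
# Block until the pod's Ready condition is True, or time out:
kubectl wait --for=condition=Ready pod/my-pod --timeout=60s

# For a Deployment, wait until the rollout has fully completed:
kubectl rollout status deployment/my-deployment
```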
You can do something like this to find pods that have a not-ready container:

```
kubectl get pods -oyaml | yq e '.items[] | select(.status.containerStatuses[].ready | not) | .metadata.name' -
```

Doesn't work so well on CronJobs and one-off Jobs, but that's just to give you an idea if you're looking for a more powerful query selector.
```
kubectl get pods -ojson | jq '.items[] | select(.status.containerStatuses[].started | not) | [{name: .metadata.name, statuses: .status.containerStatuses}]'
```

Or maybe:

```
kubectl get pods -ojson | jq '.items[] | select(.status.containerStatuses[] | ((.ready|not) and .state.terminated.exitCode!=0)) | [{name: .metadata.name, statuses: .status.containerStatuses}]'
```
Hello! A small fix to the above (very useful) comment.
The …

```
kubectl get po --all-namespaces | gawk 'match($3, /([0-9]+)\/([0-9]+)/, a) {if (a[1] < a[2] && $4 != "Completed") print}'
```

P.S. I omitted … You may also want to check this related script.
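(In the one-liner above, gawk's match() captures the READY column's "x/y" into a[1] and a[2]; the filter then prints every pod with fewer containers ready than expected, skipping Completed pods.)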
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
For anyone who is looking to filter for Succeeded pods:

```
k get pod --field-selector=status.phase=Succeeded
```
In case someone wants to use it in a shell if-statement (example with a Tekton PipelineRun resource; same behaviour with any other resource, depending on its JSON):
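(The original snippet did not survive extraction; a minimal sketch of the same pattern against a plain pod, with my-pod as a placeholder:)

```
# Branch on a jsonpath-extracted field -- here the pod phase:
if [ "$(kubectl get pod my-pod -o jsonpath='{.status.phase}')" = "Succeeded" ]; then
  echo "my-pod finished successfully"
fi
```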
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
This would still be very nice to have.
/remove-lifecycle stale
What happened:
I'd like to have a simple command to check for pods that are currently not ready.
What you expected to happen:
I can see a couple of options:
- extend kubectl get to filter the output using go/jsonpath
- distinguish between Running&Ready and merely Running pods
How to get that currently:
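(The author's command was lost in extraction; one common way to list not-ready pods today, as a sketch:)

```
# Pods with at least one container failing readiness:
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[] | select(any(.status.containerStatuses[]?; .ready | not)) | .metadata.name'
```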