
How to find out whether a pod belongs to statefulset/deployment/replica-set #78181

Closed
Cherishty opened this issue May 21, 2019 · 10 comments
Labels
kind/support · lifecycle/rotten · sig/api-machinery

Comments

@Cherishty

How can I use the Kubernetes client SDK to figure out whether a pod belongs to a StatefulSet, Deployment, or ReplicaSet?

This is the same issue as this one, but that method does not work; also, kubectl describe pod does not show the same information as kubectl get pods.

What I can find is that a pod belonging to a higher-level controller has metadata.generateName and metadata.ownerReferences set. Unfortunately, neither field is supported by field selectors.

So how can I filter on them?

@Cherishty added the kind/support label May 21, 2019
@k8s-ci-robot added the needs-sig label May 21, 2019
@Cherishty (Author) commented May 21, 2019

/sig API-Machinery

@k8s-ci-robot added the sig/api-machinery label and removed the needs-sig label May 21, 2019
@zq-david-wang

I would use a label selector (built from the pod's labels) to retrieve all relevant ReplicaSets, and then use the pod's ownerReferences (its kind field) to narrow it down further.
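
A minimal sketch of that approach with the Python client (the kubernetes package), since the question is about the pysdk; the pod and namespace names here are hypothetical:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

def top_level_controller(pod):
    """Return (kind, name) of the pod's controlling owner, following a
    ReplicaSet one hop up to its Deployment when there is one."""
    owners = pod.metadata.owner_references or []
    ctrl = next((o for o in owners if o.controller), None)
    if ctrl is None:
        return None  # a bare pod with no controller
    if ctrl.kind == "ReplicaSet":
        rs = apps.read_namespaced_replica_set(ctrl.name, pod.metadata.namespace)
        rs_owners = rs.metadata.owner_references or []
        rs_ctrl = next((o for o in rs_owners if o.controller), None)
        if rs_ctrl is not None:
            return (rs_ctrl.kind, rs_ctrl.name)  # usually a Deployment
    return (ctrl.kind, ctrl.name)

pod = core.read_namespaced_pod("web-0", "default")  # hypothetical names
print(top_level_controller(pod))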

@yue9944882 (Member)

// API version of the referent.
APIVersion string `json:"apiVersion" protobuf:"bytes,5,opt,name=apiVersion"`
// Kind of the referent.
// More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`

The in-tree Kind and APIVersion fields from metadata.ownerReferences will help.

@Cherishty (Author)

> The in-tree Kind and APIVersion fields from metadata.ownerReferences will help.

Yes, this is what I described in my last post, but how can I filter on it via the k8s Python SDK, or do I have to list all pods and resolve it myself?

@yue9944882 (Member)

> or do I have to list all pods and resolve it myself?

Sadly, yes. The good news is that the list can be paginated, so the chunks won't jam the traffic.
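
For what it's worth, a sketch of that with the Python client, filtering client-side on ownerReferences; the kind and namespace values are just examples:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def pods_controlled_by(kind, namespace):
    """Yield pods whose controlling ownerReference has the given kind,
    fetching the list in chunks of 100."""
    cont = None
    while True:
        resp = core.list_namespaced_pod(namespace, limit=100, _continue=cont)
        for pod in resp.items:
            owners = pod.metadata.owner_references or []
            if any(o.controller and o.kind == kind for o in owners):
                yield pod
        cont = resp.metadata._continue  # opaque token; empty when exhausted
        if not cont:
            break

for pod in pods_controlled_by("StatefulSet", "default"):
    print(pod.metadata.name)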

@YoubingLi commented Jun 3, 2019

A Pod has a UID attribute (metadata.uid).

If the pod belongs to another object such as a Job, Deployment, or StatefulSet, a controller-uid label is set in the pod's labels.

You can compare these two fields.

import (
    corev1 "k8s.io/api/core/v1"
)

// getPodControlID returns the owning controller's UID when the
// controller-uid label is present, and the pod's own UID otherwise.
// Indexing a nil map is safe in Go, so no nil check is needed.
func getPodControlID(pod *corev1.Pod) string {
    if uid, ok := pod.Labels["controller-uid"]; ok {
        return uid
    }
    return string(pod.UID)
}
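
One caveat, hedged: as far as I can tell, the controller-uid label is reliably set only by the Job controller; Deployment pods usually carry pod-template-hash and StatefulSet pods controller-revision-hash instead, so metadata.ownerReferences remains the more general signal.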

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 1, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 1, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to the /close above.
