Feature Request: Create a node's automatic node labels on its pods #62078
/kind feature
/sig scheduling (?)
/sig node
Related: #61906
@discordianfish whatever way works is best for me :) Getting access to the node labels natively works for me! Thanks!
In the meantime maybe we could collaborate on a docker image that implements this feature as an init container? Preferably using a k8s lib rather than bash. Env vars and/or args would tell it which node labels to transfer to which pod labels/attributes. Init containers lend themselves nicely to composition.
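To make the idea concrete, here is a minimal sketch of the mapping logic such an init container might implement. The `LABEL_MAP` format ("node-key=pod-key" pairs, comma-separated) is purely an assumption for illustration; no such image or convention exists in this thread, and fetching the actual node labels from the API server is left out.

```python
# Hypothetical mapping logic for an init container that copies selected
# node labels onto the pod. The LABEL_MAP spec format is an assumption
# for illustration, not an existing convention.

def parse_label_map(spec):
    """Parse 'node-key=pod-key,node-key2=pod-key2' into a dict.

    A pair without '=' maps the node key to itself.
    """
    mapping = {}
    for pair in filter(None, spec.split(",")):
        node_key, _, pod_key = pair.partition("=")
        mapping[node_key.strip()] = pod_key.strip() or node_key.strip()
    return mapping

def labels_to_transfer(node_labels, mapping):
    """Select node labels according to the mapping; skip absent keys."""
    return {pod_key: node_labels[node_key]
            for node_key, pod_key in mapping.items()
            if node_key in node_labels}

if __name__ == "__main__":
    node_labels = {
        "failure-domain.beta.kubernetes.io/zone": "eu-west-1a",
        "kubernetes.io/hostname": "node-7",
    }
    mapping = parse_label_map(
        "failure-domain.beta.kubernetes.io/zone=zone,"
        "kubernetes.io/hostname=host")
    print(labels_to_transfer(node_labels, mapping))
```

In a real init container this output would then be applied to the pod (e.g. via a patch call or by writing a file the main container reads); the sketch only shows the selection step.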
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Even Kubernetes' own cluster-autoscaler would benefit from a simple way to get e.g. the AWS_REGION - kubernetes/autoscaler#1208
@frittentheke Yes, that's what brought me here. Though I feel like this should be part of the downward API, so personally I would close this issue and focus on #40610 instead.
@solsson Hi, regarding your comment: I've recently stumbled upon the same issue. Here's how I've dealt with it: https://gist.github.com/gmaslowski/117f3535173d733e007d0c6c83564888
I'd say we close this one and focus on #40610 instead. That seems to be the more idiomatic way to do this.
I just banged my head against the wall when I realized the downward API doesn't support pulling node labels into pod envs. I've refactored it a bit in mateuszdrab/envars-from-node-labels@8919629; let me know what you think.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
There is no way to tell a Kafka broker pod which failure domain it is in, which is needed for rack awareness.
What you expected to happen:
Pods would inherit these labels from the node:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels
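For the Kafka case above, once the zone label reaches the pod by whatever mechanism, turning it into Kafka's rack-awareness setting is a one-liner. A hedged sketch follows; the `NODE_ZONE` env var name is an assumption for illustration (e.g. something an init container could set), not anything Kubernetes or Kafka provides.

```python
import os

# Sketch: derive Kafka's broker.rack server.properties line from a zone
# value that has been propagated into the pod, e.g. via an assumed
# NODE_ZONE env var set by an init container.

def broker_rack_config(zone):
    """Return a server.properties line for rack awareness, or '' if the
    zone is unknown (Kafka then simply runs without rack awareness)."""
    return "broker.rack={}".format(zone) if zone else ""

if __name__ == "__main__":
    print(broker_rack_config(os.environ.get("NODE_ZONE")))
```

The point of the feature request is precisely that today nothing fills in that zone value natively; this sketch only covers the last step.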
How to reproduce it (as minimally and precisely as possible):
N/A
Anything else we need to know?:
This approach would be an alternative to these:
Also relevant:
Environment:
- Kubernetes version (`kubectl version`): v1.10.0
- OS (`uname -a`):