
Automatically add label to Bottlerocket nodes for agent scheduling #11

Open
jahkeup opened this issue Nov 11, 2019 · 6 comments

Comments

@jahkeup
Member

jahkeup commented Nov 11, 2019

What I'd like:

The update operator should automatically be eligible for scheduling onto Bottlerocket hosts in a Kubernetes cluster.

The suggested deployment uses a label to identify Bottlerocket hosts and schedule onto them (i.e., the bottlerocket.aws/platform-version label; the name may change: #4). Instead of requiring administrators to set the label, it could be set (or determined) automatically for Bottlerocket nodes to eliminate the manual step.
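For context, the manual step today looks something like the following (the node name and label value here are illustrative, and the label name may change per #4):

```shell
# Illustrative only: node name is hypothetical and the label
# name/value may change (see #4).
kubectl label node ip-192-168-1-10.us-west-2.compute.internal \
  bottlerocket.aws/platform-version=1.0.0
```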

@jahkeup
Member Author

jahkeup commented Nov 11, 2019

Support for static node labels was added in bottlerocket-os/bottlerocket#366, so this should be possible via the API or another on-box process.
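A sketch of what that could look like using the static node-labels setting from bottlerocket-os/bottlerocket#366, supplied through user data (the label name and value are illustrative; see #4):

```toml
# Illustrative Bottlerocket user data (TOML).
# Label name/value are assumptions and may change (see #4).
[settings.kubernetes.node-labels]
"bottlerocket.aws/updater-interface-version" = "2.0.0"
```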

@webern webern transferred this issue from bottlerocket-os/bottlerocket Feb 26, 2020
@jahkeup jahkeup changed the title dogswatch: add platform-version label to Node from kubelet/host Automatically add label to Bottlerocket nodes for agent scheduling Feb 27, 2020
@jahkeup
Member Author

jahkeup commented Feb 27, 2020

It might also be feasible to have a job scheduled onto new nodes to "query" them for the updater interface and set the appropriate label (#4). This might be a tad unusual, though, and needs further thought + investigation 🤔

@rothgar

rothgar commented Aug 31, 2020

The nodegroup can have labels applied to instances automatically to avoid needing to do it manually.

Example eksctl config:

---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: bottlerocket2
  region: us-west-2
  version: '1.17'

nodeGroups:
  - name: ng-bottlerocket2
    labels: { bottlerocket.aws/updater-interface-version: 2.0.0 }
    instanceType: m5.large
    desiredCapacity: 3
    amiFamily: Bottlerocket

@jahkeup
Member Author

jahkeup commented Aug 31, 2020

The nodegroup can have labels applied to instances automatically to avoid needing to do it manually.

I personally use the above method as well for short lived clusters! It's very handy.

There's a drawback to doing it this way: when an interface version bump is needed, you'd need to replace your nodes (after updating the template's user data) or update settings via the API on each node. That said, the per-nodegroup label works well and is convenient.
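For instance, one way to bump the version without replacing nodes would be a one-off relabel of the affected nodes (the version values here are hypothetical):

```shell
# Hypothetical version bump: relabel all nodes currently at 1.0.0 to 2.0.0.
# Selects nodes by the old label value, then overwrites it.
kubectl label nodes \
  -l bottlerocket.aws/updater-interface-version=1.0.0 \
  bottlerocket.aws/updater-interface-version=2.0.0 --overwrite
```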

This issue is focused on the host "advertising" the appropriate interface version, rather than setting the value globally (by way of a nodegroup-wide label). We'd need to propagate this data from the OS and add it to the kubelet's configured labels. The Bottlerocket image would have the interface version label value built in, and it would be correct for any given build. This would eliminate the need to hand-edit or otherwise update your nodes' updater-interface-version altogether!

@jpmcb
Contributor

jpmcb commented Jan 19, 2023

I'm not sure there's a great way to add labels to only bottlerocket nodes from the controller's perspective. Bottlerocket doesn't really expose any metadata to the kubernetes API that we could reliably use to determine if a node is a bottlerocket node or something else.

For example, on one of my bottlerocket nodes:

❯ k describe nodes ip-192-168-141-233.us-west-2.compute.internal | rg bottle
  Container Runtime Version:  containerd://1.6.6+bottlerocket

The only metadata that is remotely Bottlerocket-related is a +bottlerocket build-time version flag on containerd. And I don't think relying on containerd's version is a great idea, since in the future users may build Bottlerocket with a different container runtime.
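To make the fragility concrete, a detection sketch keyed off that suffix would look like the following (the version string is copied from the node output above); it breaks as soon as a build ships a different runtime:

```shell
# Fragile heuristic: keys off the "+bottlerocket" suffix in the node's
# containerRuntimeVersion field. Not reliable across runtimes/builds.
runtime="containerd://1.6.6+bottlerocket"
case "$runtime" in
  *+bottlerocket) echo "looks like Bottlerocket" ;;
  *)              echo "unknown OS" ;;
esac
```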

We'd need to propagate this data from the OS and add it to the kubelet's configured labels. The bottlerocket image would have the interface version label value built in and would be correct for any given build.

I think this is an interesting solution and wouldn't require too much work on the Bottlerocket side. When building the Kubernetes variants, we'd need to set the label in the kubelet build.

@webern
Member

webern commented Jan 20, 2023

wouldn't require too much work on the bottlerocket side

It looks like there isn't a lot to work with. I wonder if providerID is the right place to identify ourselves as a Bottlerocket node. https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration
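For reference, a node's providerID can be inspected like this (node name is illustrative); on AWS it typically encodes the cloud provider and instance rather than the OS:

```shell
# Inspect a node's providerID (illustrative node name).
kubectl get node ip-192-168-141-233.us-west-2.compute.internal \
  -o jsonpath='{.spec.providerID}'
# On AWS this is typically of the form aws:///<az>/<instance-id>
```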
