Version update control with GitOps #56

Open

springroll12 opened this issue Feb 21, 2021 · 6 comments
@springroll12

Issue or Feature Request:

Is it possible to upgrade only when prompted by a change to a Kubernetes manifest? Automated upgrades are great, but it would be nice to track and trigger them through some auditable process. Ideally we could have a Bottlerocket CRD and change only the version number, flavor, or Kubernetes version to trigger an upgrade.
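Purely as a sketch of what I have in mind (the BottlerocketUpgrade kind and all of its fields are hypothetical; no such API exists today):

```yaml
# Hypothetical CRD instance: bumping a field in a Git-tracked manifest
# would trigger the corresponding node upgrades. Kind and fields are
# illustrative only.
apiVersion: bottlerocket.aws/v1alpha1
kind: BottlerocketUpgrade
metadata:
  name: cluster-os-version
spec:
  version: "1.6.0"          # target Bottlerocket OS version
  flavor: aws-k8s-1.21      # variant/flavor to move to
  kubernetesVersion: "1.21" # Kubernetes version for the variant
```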

My current understanding is that nothing like this is possible today; is that correct?

@springroll12
Author

Is this possible? I think it would be difficult to recommend this operator for production use if you cannot control when updates happen.

@jhaynes added this to the Backlog milestone on May 21, 2021
@jhaynes
Contributor

jhaynes commented May 21, 2021

Rather than extending this operator with this functionality, we are considering building a settings operator to accomplish this instead. Would that fit your use case?

@springroll12
Author

springroll12 commented May 22, 2021

It may, yes, as long as the settings operator can perform full OS updates similar to what this operator does.

Ideally there would be a Bottlerocket CRD that defines all the settings (perhaps including some of what currently lives in user data) that you could just apply to your cluster. eksctl essentially has this structure in its Bottlerocket setup YAML already; see the sketch below.
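For reference, a minimal sketch of what I mean, assuming I'm reading eksctl's ClusterConfig schema correctly (the cluster name and settings values are illustrative):

```yaml
# eksctl nodegroup with Bottlerocket settings declared in YAML.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
nodeGroups:
  - name: bottlerocket-ng
    instanceType: m5.large
    desiredCapacity: 3
    amiFamily: Bottlerocket
    bottlerocket:
      enableAdminContainer: true
      settings:
        motd: "Managed through eksctl"
```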

I'm not sure how it would work for cluster bootstrapping, though, since the CRD would (by necessity) have to be applied after the nodes are created. It would probably need to handle version downgrades as well.

I am concerned that the settings operator (or this one) doesn't seem to be a priority, though. Also, it should not use SSM as mentioned in that thread; otherwise it won't be portable. It should instead spin up a pluggable admin/control container to perform the API actions.

@springroll12
Author

springroll12 commented Feb 8, 2022

I just want to congratulate the brupop team on the release of v0.2.0! Thanks for all the hard work putting it together; I'm very excited to give it a try! I believe this release addresses this issue, so perhaps it can be closed?

@cbgbt
Contributor

cbgbt commented Feb 8, 2022

Hello. Many thanks for the kind words, and we certainly welcome any feedback if you do happen to try brupop 0.2.0!

Unfortunately, I think we still want to keep this issue open. While brupop uses custom resources internally to coordinate updates between the operator and individual nodes, these resources don’t provide a great interface for cluster administrators to orchestrate moving to specific versions at a given time.

Architecturally, I think that we still want to accomplish this via the settings operator that was previously discussed. This is something that the team is exploring as a future deliverable. Ultimately, using settings as a single entry point for this mechanism seems like the option that provides the best experience, and I’m hesitant to implement an interface that could interact confusingly with that vision.

In the meantime, though, a possible workaround could be to use AWS Systems Manager or a similar automation tool to set settings.updates.version-lock in the Bottlerocket API across your fleet to the specific desired version when you are prepared to upgrade to it. In particular, State Manager offers useful controls over update velocity. While this isn't as ideal as a Kubernetes-native solution, it should have the desired effect.
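As a rough sketch, assuming the SSM agent running in Bottlerocket's control container, a State Manager association could apply a command document along these lines (the target version is illustrative):

```yaml
# SSM command document (schemaVersion 2.2) for a State Manager association.
schemaVersion: "2.2"
description: Pin Bottlerocket hosts to a specific OS version
mainSteps:
  - action: aws:runShellScript
    name: setVersionLock
    inputs:
      runCommand:
        # The update API then targets this version instead of "latest".
        - apiclient set settings.updates.version-lock="v1.6.0"
```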

@springroll12
Author

Thanks for clarifying.

I don't think a solution based on settings.updates.version-lock would work well in cases where the cluster autoscaler is enabled. If a new node is added, we would need a way to trigger the version-lock on it. In our case (and, I would wager, many others) the initial Bottlerocket version is specified in infrastructure code (i.e., Terraform), which means that forcing new nodes to obey the new version-lock would also require altering the user data, which forces recreation of all nodes anyway.

Is there some provision in the settings operator or brupop design for new nodes that join the cluster? It would be a hard sell that a new node joining the cluster has to look up its own version and then restart again to match the rest of the cluster's nodes. In that case it makes more sense to just provision new nodes with an updated Bottlerocket version and drain and remove the old ones.
