
Proposal: Host update TUF repo in cluster as kubernetes deployment #337

Open
jpmcb opened this issue Nov 7, 2022 · 1 comment
jpmcb commented Nov 7, 2022

Background

Bottlerocket provides a way to set the settings.updates.metadata-base-url and settings.updates.targets-base-url settings. By default, both point to updates.bottlerocket.aws. This URL is the public TUF repository, which contains the metadata used when querying for and performing updates.
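As a sketch, these settings can be configured through Bottlerocket user data (TOML). The URL values below are placeholders, since the exact default paths vary by variant and architecture:

```toml
# Illustrative user-data fragment; treat the URL values as placeholders,
# not the exact defaults.
[settings.updates]
metadata-base-url = "https://updates.bottlerocket.aws/"
targets-base-url = "https://updates.bottlerocket.aws/"
```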

Currently, the Brupop agents use their host's apiclient to interface directly with the update API. When querying for or performing updates, each node running an agent effectively calls out to the updates.bottlerocket.aws endpoint.

In typical Kubernetes network deployments this may be fine, but in a more locked-down network it means that every single Bottlerocket node needs a network rule allowing egress to updates.bottlerocket.aws.

A few solutions:

Provide the TUF repo in cluster option

Bottlerocket has a documented process of downloading the public TUF repository and deploying it in order to perform an update. Brupop could provide a kubernetes deployment that automatically fetches (and frequently queries for) the latest update to the public TUF repository.

Users could then deploy this alongside Brupop and update their nodes' settings to point to the in-cluster address. Something like:

"targets-base-url": "br-tuf-repo.brupop-bottlerocket-aws.svc.cluster.local"

This would likely also require users to configure a storage interface in order to host the TUF repository in cluster.
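A minimal sketch of what that could look like, assuming a hypothetical syncing image that periodically mirrors the public TUF repository into a volume and serves it over HTTP (all names, the image, and the namespace are illustrative, not part of Brupop today):

```yaml
# Hypothetical manifest sketch; image, names, and namespace are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: br-tuf-repo
  namespace: brupop-bottlerocket-aws
spec:
  replicas: 1
  selector:
    matchLabels: { app: br-tuf-repo }
  template:
    metadata:
      labels: { app: br-tuf-repo }
    spec:
      containers:
        - name: tuf-mirror
          image: example.com/br-tuf-repo:latest  # hypothetical sync-and-serve image
          ports: [{ containerPort: 80 }]
          volumeMounts:
            - name: tuf-data
              mountPath: /var/lib/tuf
      volumes:
        - name: tuf-data
          persistentVolumeClaim:
            claimName: br-tuf-repo-data  # the storage interface mentioned above
---
apiVersion: v1
kind: Service
metadata:
  name: br-tuf-repo
  namespace: brupop-bottlerocket-aws
spec:
  selector: { app: br-tuf-repo }
  ports: [{ port: 80, targetPort: 80 }]
```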

Ultimately, the goal here would be to enable customers to isolate (via taints) the pods hosting the TUF repo onto a few quarantined "edge" nodes that have egress access to updates.bottlerocket.aws, instead of the entire cluster of Bottlerocket nodes needing it.
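That quarantining could be done with a standard Kubernetes taint on the edge nodes plus a matching toleration and node selector on the TUF-repo pods. The taint key and label here are illustrative, for example applied with kubectl taint and kubectl label:

```yaml
# Illustrative pod-spec fragment; the taint key and node label are assumptions.
spec:
  tolerations:
    - key: "edge-egress"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  nodeSelector:
    node-role/edge: "true"
```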

The self managed proxy option

Similar to the above, users could deploy their own proxy (like Nginx or Squid) and update their Bottlerocket node settings to point to that proxy. Again, users could apply a taint so that the proxy runs only on a few quarantined "edge" nodes with egress access to updates.bottlerocket.aws.

A downside here would be the need for constant network access through the proxy (unless a caching layer is also implemented).

If we don't want the heavy lift of shipping another deployment as part of the Brupop solution, we should at least document for users how to set up this type of proxy.
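As a sketch of the self-managed option, an Nginx configuration along these lines could cache TUF metadata and targets while proxying to the public endpoint. Cache sizes, paths, and TTLs are illustrative and would need tuning:

```nginx
# Illustrative caching proxy for updates.bottlerocket.aws; tune sizes and TTLs.
proxy_cache_path /var/cache/nginx/tuf levels=1:2 keys_zone=tuf:10m max_size=10g;

server {
    listen 80;

    location / {
        proxy_pass https://updates.bottlerocket.aws/;
        proxy_set_header Host updates.bottlerocket.aws;
        proxy_cache tuf;
        # Short TTL so freshly published update metadata is picked up.
        proxy_cache_valid 200 10m;
    }
}
```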

@stormmore

We are currently in one of those restrictive environments. We are pushing to have our network security team open up the firewall to the updates.bottlerocket.aws endpoint, but will get some pushback since it is such a large range of IPs we are opening up for it.

@jpmcb jpmcb added this to the Backlog milestone Nov 15, 2022