Separate proxy and controller Deployments #921
tl;dr

Using an umbrella chart results in a number of limitations on template … This requires some redesign of the values.yaml to support configuring the …

#923 is the companion to this analysis. I do not think we should attempt to support both models, as designing good …

Background

The current design of the … Migrations Jobs often also share many of these settings even though they're a … To split Kong and the controller into separate Deployments, we thus need to …

values.yaml organization

The chart has grown organically over time, with historically less-informed … we added one of the root-level annotations keys early, before we needed the …

Mirrored and consolidated Kubernetes resource configuration

Since adding a second Deployment will require separate configuration keys for … This section should hold all configuration that lives under the Kubernetes … This section may hold configuration related to Deployment-associated resources, …

Application-level configuration

We have several sections dedicated to application-specific configuration … While these generally manipulate values in a container's environment, they …

Migration strategies

Moving keys is a breaking change for existing settings. If we move … We can provide a helper script, however, as looking up a YAML key and moving it … We could alternatively continue to support existing keys, but this comes with … We can opt to use the newest organization pattern for new settings only (for …

Existing release migration

After changes to values.yaml, users will need to upgrade releases.

kong chart users

Going from the … AFAIK the main task here is to handle the change from an internal localhost …

Multiple release kong chart users

Although some classes of separate release (namely separate controller releases) … These users still need to migrate values.yaml and set up discovery if they were … Although the multi-Deployment configuration would make single-release hybrid …

ingress chart users

This state has no perfect migration path. Helm, broadly, cannot transfer … Migrating existing … Since …

Template refactoring

Using mirrored structure for the Kubernetes resource configuration should … It may be simple enough to just inject the environment variable set generated …
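A rough sketch of what the mirrored layout could look like (a hypothetical illustration only; the key names below are placeholders, not a final schema):

```yaml
# Hypothetical sketch of a mirrored values.yaml layout: the gateway and the
# controller each get a Deployment-scoped block with the same shape, so the
# same template helpers can render either one. Key names are placeholders.
gateway:
  deployment:
    replicas: 2
    podAnnotations:
      example.com/role: proxy
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
controller:
  deployment:
    replicas: 1
    podAnnotations: {}
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
```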
👍 I like the idea. Set a clear end-of-support date for 2.x so users can align their expectations, and add prominent links to 3.x and the migration guide.
That's the way I've done it in the past. Spoiler alert: it's always more work than anticipated.
Tempting... but no. This will end badly. Fiddly is an understatement. What I'm slightly concerned about is the testing situation. We had an approach to golden tests in #748, but that never took off. We could revive that idea and use it as we progress with the refactor and reorganisation of keys in values.yaml, to ensure we're getting the template output we expect.
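As a loose sketch of how golden tests could work here (one possible shape, not necessarily the #748 approach; paths and values files are made up):

```sh
# Hypothetical golden-test flow: render the chart with a pinned values file
# and diff the output against a checked-in golden file. Paths are placeholders.
helm template golden-test ./charts/kong -f test/values/gateway-discovery.yaml \
  > /tmp/rendered.yaml
diff -u test/golden/gateway-discovery.yaml /tmp/rendered.yaml
```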
Any particular pitfalls you hit, or just that some items that needed migration were missed initially?
If it's a 1:1 move without transformation then it should be OK-ish; with any sort of transformation or pivoting, it's annoying to get right the first time.
Since we are (along with simply rearranging settings) moving users from sidecar to gateway discovery, the admin API must be exposed, which is not something existing … We did unfortunately omit restrictions on its usage in …
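For illustration, exposing the admin API cluster-internally for discovery might look something like this (key names follow the chart's general shape but are not guaranteed to match the final schema):

```yaml
# Hypothetical sketch: expose the admin API on an in-cluster Service so the
# controller can reach it via gateway discovery. Keys are illustrative and
# may not match the chart's final schema.
admin:
  enabled: true
  type: ClusterIP
  clusterIP: None   # headless, so individual proxy pods can be discovered
  tls:
    enabled: true
```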
If we take lessons learnt from https://docs.google.com/document/d/14jwnYoSj5RGvOCaWGpPuqa1VvyjKnFmscnYpiIxfXQ0/edit#heading=h.csub82xg1smh and (haven't figured it out yet) generate certs via helm's … Currently we're using …

which wouldn't work with Gateway Discovery because …

We could then flip the tls skip verify to …
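A minimal sketch of what chart-side generation could look like, assuming Helm's built-in genCA/genSignedCert functions were used (the Secret name, CN, SAN, and validity periods are all placeholders, and this is not the chart's actual template):

```yaml
{{- /* Hypothetical sketch only: generate a CA and a signed admin API
       certificate with Helm's built-in genCA/genSignedCert and store them
       in a TLS Secret. Names and validity periods are placeholders. */}}
{{- $ca := genCA "kong-admin-ca" 3650 }}
{{- $cert := genSignedCert "kong-admin" (list) (list "my-release-kong-admin.default.svc") 3650 $ca }}
apiVersion: v1
kind: Secret
metadata:
  name: my-release-kong-admin-tls
type: kubernetes.io/tls
data:
  tls.crt: {{ $cert.Cert | b64enc }}
  tls.key: {{ $cert.Key | b64enc }}
  ca.crt: {{ $ca.Cert | b64enc }}
```

One known caveat with genCA/genSignedCert is that the certificates are regenerated on every helm upgrade unless the template is paired with a lookup of the existing Secret.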
Practical update testing with a (slightly outdated) #947 indicates that the ingress->kong migration actually lacks one of the major expected hurdles: Helm just doesn't care if you change the chart used for a release. Running something like
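(an illustrative reconstruction, since the exact command wasn't preserved above; the release name and values file are placeholders)

```sh
# Upgrade a release originally installed from the ingress chart to the kong
# chart. Release name and values file are placeholders.
helm upgrade my-release kong/kong -f migrated-values.yaml
```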
is entirely acceptable and functions no differently than upgrading to a new version of the same chart. Migrating an … Changes to resource names do, however, result in issues. The default names for Deployments and Services in …
This doesn't yet cover the introduction of mutual TLS between the controller and admin API, which I'd like to have in place by default, and which will probably introduce additional complications. Values migration from …
values.yaml migration strategy as of https://github.com/Kong/chart-migrate/tree/71146f9534e74a8195d205efa5e13d152db99b60
We can run the root migrate command with a separate key list to handle 1 and 2. From there, a second …
Following up on mTLS now that the split is staged. Some discussion earlier at #921 (comment).

The existing chart-managed mTLS implementation from #780 is a bit odd:

Configuration-wise, the controller and proxy configuration is not integrated. You can set the controller to use a generated certificate and CA, but the proxy will not require it unless you manually specify its name in the admin API configuration.

Implementation-wise, the admin Service template handles controller certificate generation and part of the proxy mount, though again, these systems do not integrate with one another. The proxy mount (from the admin API configuration) uses an unusual way to handle the configuration options. This implementation approach hits a chicken-egg problem with the generated certificate, since the …

I kinda want to simplify the UX and implementation here. Ideally we'd only use a single section, since there shouldn't be any reason to configure different CAs for the admin API and controller. I don't know that I'd want to make a breaking change for this, however, and it's not something where we could simply migrate key locations.

On the implementation side, dropping support for providing the CA string in values.yaml would allow us to use a normal Secret mount--I figure …
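For reference, a plain Secret mount of the sort alluded to above might look roughly like this (a sketch only; the Secret, volume, and path names are made up, and this is not the chart's current template):

```yaml
# Hypothetical sketch: mounting a CA Secret into the proxy container as a
# plain volume, rather than templating the CA string out of values.yaml.
# Secret, volume, and path names are placeholders.
spec:
  volumes:
    - name: admin-api-ca
      secret:
        secretName: my-release-admin-api-ca
  containers:
    - name: proxy
      volumeMounts:
        - name: admin-api-ca
          mountPath: /etc/admin-api-ca
          readOnly: true
```

The proxy's admin listen configuration could then point at the mounted path for its client CA, with the tradeoff that users provide a Secret name rather than an inline PEM string.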
I'll try to give some context on why some things were done the way they are.
Originally #780 was implemented with split releases in mind, as that was the only way to run Gateway Discovery at the time. Because of that, we couldn't do any validation across the proxy and controller configuration.
The idea was that one of the releases had to be responsible for generating the key-pair Secret, and the controller release was designated to be that one. Because of that, you had to deploy the controller release first and then use the generated Secret name in the gateway release (or the name you hardcoded in …).
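Roughly, the two-release flow described above looks like this (the key names and Secret name below are illustrative only, not necessarily the chart's real schema):

```yaml
# Hypothetical sketch of the split-release pattern: the controller release
# generates and owns the client key-pair Secret, and the gateway release
# references it by name. Keys and names are placeholders.

# controller-release values:
ingressController:
  adminApi:
    tls:
      client:
        enabled: true   # this release creates the client certificate Secret

# gateway-release values:
admin:
  tls:
    client:
      secretName: my-controller-admin-api-client-cert
```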
Kong/kubernetes-ingress-controller#4671 talks about chart work but isn't in this repo (presumably for milestone reasons). This is a stub to track the same work on the charts side.