Facilitate ConfigMap rollouts / management #22368
Comments
cc @pmorie
This is one approach. I still want to write a demo, using the live-update …
@thockin Live update is a different use case than what's discussed here.
I think live updates without restarts might fall under my issue, #20200.
@caesarxuchao @lavalamp: We should consider this issue as part of implementing cascading deletion.
Ref #9043 re. in-place rolling updates.
Yeah I think it should be trivial to set a parent for a config map so it automatically gets cleaned up. (Why not just add a configmap template section to deployment anyway? Seems like a super common thing people will want to do.)
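For illustration of the "configmap template section" idea, a purely hypothetical sketch (this `configMapTemplate` field does not exist in the real Deployment API; the shape below is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  # Hypothetical field -- not part of the actual Deployment API.
  configMapTemplate:
    metadata:
      name: myapp-config     # would get a generated suffix per revision
    data:
      LOG_LEVEL: debug
  template:
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        envFrom:
        - configMapRef:
            name: myapp-config   # would be resolved to the generated name
```

The Deployment controller could then stamp out a versioned ConfigMap per revision, exactly as it stamps out ReplicaSets today.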
@lavalamp, I guess you mean we can set replica sets as the parent of a config map, and delete the config map when all the replica sets are deleted?
@caesarxuchao Yes
Recent discussion: |
Thinking out loud: In OpenShift we have the concept of triggers. For example, when an image tag is referenced by a DeploymentConfig and there is a new image for that tag, we detect it via a controller loop and update the DeploymentConfig by resolving the tag to the full spec of the image (thus triggering a new deployment, since it's a template change). Could we possibly do something similar here? A controller loop watches for ConfigMap changes and triggers a new deployment (we would also need to support redeployments of the same thing, since there is no actual template change involved; maybe by adding an annotation to the pod template?)
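The "annotation on the pod template" idea is roughly what operators do by hand today. For example (the Deployment name and annotation key below are illustrative), patching any pod-template annotation changes the template and so triggers a normal rolling update, even though the container spec is unchanged:

```shell
# Bumping a pod-template annotation forces a new ReplicaSet and rollout
# even when the containers themselves are unchanged.
kubectl patch deployment myapp --patch \
  '{"spec":{"template":{"metadata":{"annotations":{"config-version":"2"}}}}}'
```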
Fundamentally, there need to be multiple ConfigMap objects if we're going to have some pods referring to the new one and others referring to the old one(s), just as with ReplicaSets. |
On Wed, Mar 30, 2016 at 01:56:24AM -0700, Michail Kargakis wrote:
(I posted the original mail in the thread on the google group) I think making a deployment is the best way, too. Because you can have a syntax … But I'm not sure if the configmap should be updated, as you propose, or if it …
Sorry to bother again, but can this be tagged for milestone v1.3 and, maybe, a lower priority? |
@bgrant0607 ping? |
What work is needed, if we agree deployment is the best path?
On Tue, Apr 5, 2016 at 5:10 PM, rata notifications@github.com wrote:
@bgrant0607 no problem! I can help, yes. Not sure I can get time from my work and I'm quite busy with university, but I'd love to help and can probably find some time. I've never really dealt with kube code (I did a very simple patch only), but I'd love to do it :) Also, I guess that a ConfigMap can have several owners, right? I think right now it can be used in several RSs, and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious). Any pointers on where to start? Is someone willing to help with this? PS: @bgrant0607 sorry for the delay, it was midnight here when I got your answer :)
If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.
@bgrant0607 I also have the same Q here -- I think we will need to reference-count configmaps / secrets since they can be referred to from pods owned by multiple different controllers.
@rata, I'm working on cascading deletion and am putting together a PR that adds the necessary API, including the "OwnerReferences". I'll cc you there.
@caesarxuchao thanks!
On Wed, Apr 06, 2016 at 09:37:25AM -0700, Paul Morie wrote:
Sure, but I guess the same should probably work for both, right? I imagine, for example, the "internals" ConfigMaps using a name like … This way, when you upgrade, the ConfigMap will become orphaned and it should be … I think this can work for both.
Ohh, thanks!
@caesarxuchao @rata We'll likely need a custom mechanism in Deployment to ensure that a referenced ConfigMap is owned by the generated ReplicaSets that reference it.
@Fran-Rg https://github.com/stakater/Reloader/ is the tool you're looking for, but as @remram44 stated it is not a safe mechanism for rolling updates, as it is best practice to keep your pods immutable. We're using a hash with helm and it works great; you don't even need to think about this problem anymore once you've implemented this solution within the chart.
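The checksum-annotation approach described above is also documented in Helm's chart tips and tricks. A minimal sketch, assuming the chart keeps its ConfigMap in `templates/configmap.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        # Any change to the rendered ConfigMap changes this hash, which
        # changes the pod template and therefore triggers a rolling update.
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```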
@dudicoco
kustomize (built into kubectl) will also hash the ConfigMap, add the hash to its name, and replace references to it across the project. Spinnaker has been doing the same thing for years.
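For reference, kustomize's `configMapGenerator` does this hashing automatically; a minimal `kustomization.yaml` sketch (resource and key names are illustrative):

```yaml
# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: myapp-config
  literals:
  - LOG_LEVEL=debug
# kustomize emits a ConfigMap named e.g. myapp-config-<hash> and rewrites
# all references to "myapp-config" in deployment.yaml to the hashed name,
# so every content change rolls the Deployment.
```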
The hash is not always possible; we sometimes use a ConfigMap or Secret created outside of the chart. Currently we use the Reloader operator as a workaround.
@dudicoco any chance you'd write a blog post about it? I'm working on implementing this as we speak; it's been a rough journey.
You'd think being the most upvoted issue (by far) would mean it has higher prioritization to bring it to the finish line. We have to give credit to @kfox1111 for driving the associated KEP since 2019 💪🏼
For whom, @mubarak-j? We are all individuals here. There is a KEP process; someone has to drive it. If it is a high priority for someone, they need to step up and run with it. Please don't shift the responsibility to someone else.
BTW, there's a statistical effect here. Improvements that have been more popular may have garnered fewer upvotes, because they were so popular that people made time to work on them. The length of time that this has been important but not staffed has allowed it to receive reactions that actually shipped features didn't attract. If anyone wants to contribute to defining and implementing this enhancement, we've got guides at https://k8s.dev/ that are aimed at helping you learn how. PS: the KEP for this is kubernetes/enhancements#3704
When is this feature planned to be released? This issue has already been open for 7 years.
Previously, environment variables were templated into a ConfigMap which was referenced through an `envFrom` in the deployment. Unfortunately, Kubernetes does not restart deployments on changes to their referenced ConfigMaps[1], so this indirection means that deployments have to be restarted manually every time a change is made - something that is very easy to forget in an otherwise GitOpsy workflow. [1] kubernetes/kubernetes#22368
Previously, environment variables were templated into a ConfigMap which was referenced through an `envFrom` in the deployment. Unfortunately, Kubernetes does not restart deployments on changes to their referenced ConfigMaps[1], so this indirection means that deployments have to be restarted manually every time a change is made - something that is very easy to forget in an otherwise GitOpsy workflow. The `environment_containing_json` does not seem to serve any purpose, but was probably copied from an old method of templating the ConfigMap using a for-loop instead of `toYaml`. [1] kubernetes/kubernetes#22368
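Until this is addressed in Kubernetes itself, the usual manual workaround after editing a referenced ConfigMap is an explicit restart (the Deployment name below is illustrative; `kubectl rollout restart` requires kubectl 1.15+):

```shell
kubectl apply -f configmap.yaml            # update the ConfigMap in place
kubectl rollout restart deployment/myapp   # force the pods to pick it up
```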
currently, pods in the sematic server deployment don't restart when the configmap associated with them changes. on every helm upgrade, helm _does_ rerun the migration pod with the updated configmap values, but the deployment pods themselves stick around unchanged. this is a known issue in k8s: kubernetes/kubernetes#22368 to remove any confusion here, we take a checksum of all of the values in the helm chart and apply the checksum as an annotation on the pods. with this, k8s will forcibly refresh the pods on _any_ change to the helm values. while that is a bit of overkill (not all changes technically require the pods to restart), it is far less error-prone than the status quo. also documented as a helm tip here: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
To do a rolling update of a ConfigMap, the user needs to create a new ConfigMap, update a Deployment to refer to it, and delete the old ConfigMap once no pods are using it. This is similar to the orchestration Deployment does for ReplicaSets.
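The steps above can be sketched with kubectl (all names are illustrative):

```shell
# 1. Create a new, uniquely named ConfigMap.
kubectl create configmap myapp-config-v2 --from-file=config.properties

# 2. Point the Deployment's env vars at it; this edits the pod template
#    and triggers a normal rolling update.
kubectl set env deployment/myapp --from=configmap/myapp-config-v2
#    (for volume-mounted ConfigMaps, edit the volume reference instead,
#    e.g. with kubectl edit or kubectl patch)

# 3. Once no pods reference the old ConfigMap, delete it.
kubectl delete configmap myapp-config-v1
```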
One solution could be to add a ConfigMap template to Deployment and do the management there.
Another could be to support garbage collection of unused ConfigMaps, which is the hard part. That would be useful for Secrets and maybe other objects, also.
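Garbage collection of this kind later landed in Kubernetes as `metadata.ownerReferences`. For illustration, a ConfigMap owned by a ReplicaSet is deleted automatically when that ReplicaSet is deleted (the ReplicaSet name and uid below are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config-v2
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: myapp-7b58b6c6d                       # illustrative owner name
    uid: d9607e19-f88f-11e6-a518-42010a800195   # placeholder uid
data:
  LOG_LEVEL: debug
```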
cc @kubernetes/sig-apps-feature-requests