Facilitate ConfigMap rollouts / management #22368

Open
bgrant0607 opened this issue Mar 2, 2016 · 257 comments
Labels
area/app-lifecycle, area/configmap-api, area/declarative-configuration, lifecycle/frozen, priority/backlog, sig/apps, sig/service-catalog
Comments

@bgrant0607
Member

bgrant0607 commented Mar 2, 2016

To do a rolling update of a ConfigMap, the user needs to create a new ConfigMap, update a Deployment to refer to it, and delete the old ConfigMap once no pods are using it. This is similar to the orchestration Deployment does for ReplicaSets.

One solution could be to add a ConfigMap template to Deployment and do the management there.

Another could be to support garbage collection of unused ConfigMaps, which is the hard part. That would be useful for Secrets and maybe other objects, also.

cc @kubernetes/sig-apps-feature-requests
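For illustration, the manual workflow above looks roughly like this (a minimal sketch; all names and values are made up):

```yaml
# Step 1: create a new, uniquely named ConfigMap instead of editing the old one.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2           # app-config-v1 stays around until no pods reference it
data:
  config.yaml: |
    logLevel: debug
---
# Step 2: point the Deployment at the new ConfigMap; the pod-template change
# triggers a normal rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example/app:1.0
        volumeMounts:
        - name: config
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: app-config-v2   # was app-config-v1
# Step 3: once the rollout finishes and no pods use it, delete app-config-v1 by hand.
```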

@bgrant0607 bgrant0607 added priority/backlog, area/app-lifecycle, team/ux labels Mar 2, 2016
@bgrant0607
Member Author

cc @pmorie

@thockin
Member

thockin commented Mar 23, 2016

This is one approach. I still want to write a demo, using the live-update feature of configmap volumes to do rollouts without restarts. It's a little scarier, but I do think it's useful.
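For reference, the volume-based approach relies on the kubelet refreshing files projected from a ConfigMap volume in place (this does not apply to env vars or subPath mounts), so a process that re-reads its config can pick up changes without a restart. A minimal sketch, with made-up names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: live-config-demo
spec:
  containers:
  - name: app
    image: example/app:1.0
    volumeMounts:
    - name: config
      mountPath: /etc/app     # files here are rewritten when the ConfigMap changes
  volumes:
  - name: config
    configMap:
      name: app-config        # edits to this ConfigMap propagate after the kubelet's sync delay
```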

@bgrant0607 bgrant0607 added this to the next-candidate milestone Mar 23, 2016
@bgrant0607
Member Author

@thockin Live update is a different use case than what's discussed here.

@therc
Member

therc commented Mar 23, 2016

I think live updates without restarts might fall under my issue, #20200.

@bgrant0607
Member Author

@caesarxuchao @lavalamp: We should consider this issue as part of implementing cascading deletion.

@bgrant0607
Member Author

Ref #9043 re. in-place rolling updates.

@lavalamp
Member

Yeah I think it should be trivial to set a parent for a config map so it automatically gets cleaned up.

(Why not just add a configmap template section to deployment anyway? Seems like a super common thing people will want to do.)
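For illustration, once owner references exist, the parent link could look roughly like this (the ReplicaSet name and uid are placeholders; the uid has to match the actual owner):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: app-6d4f9b7c8                          # placeholder name
    uid: 00000000-0000-0000-0000-000000000000    # placeholder; must be the ReplicaSet's real UID
data:
  config.yaml: |
    logLevel: info
```

The garbage collector would then delete the ConfigMap once every ReplicaSet listed as an owner is gone.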

@caesarxuchao
Member

@lavalamp, I guess you mean we can set replica sets as the parent of a config map, and delete the config map when all the replica sets are deleted?

@bgrant0607
Member Author

@caesarxuchao Yes

@bgrant0607
Member Author

@0xmichalis
Contributor

Recent discussion:
https://groups.google.com/forum/#!topic/google-containers/-em3So0KBnA

Thinking out loud: In OpenShift we have the concept of triggers. For example when an image tag is referenced by a DeploymentConfig and there is a new image for that tag, we detect it via a controller loop and update the DeploymentConfig by resolving the tag to the full spec of the image (thus triggering a new deployment since it's a template change). Could we possibly do something similar here? A controller loop watches for configmap changes and triggers a new deployment (we would also need to support redeployments of the same thing since there is no actual template change involved - maybe by adding an annotation to the podtemplate?)
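To make that last part concrete, the "redeploy without a real template change" could be as simple as a controller bumping an annotation on the pod template (the annotation key and value here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        example.com/config-revision: "42"   # hypothetical key; bumping the value changes the pod template and triggers a rollout
    spec:
      containers:
      - name: app
        image: example/app:1.0
        envFrom:
        - configMapRef:
            name: app-config
```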

@bgrant0607
Member Author

Fundamentally, there need to be multiple ConfigMap objects if we're going to have some pods referring to the new one and others referring to the old one(s), just as with ReplicaSets.

@rata
Member

rata commented Mar 30, 2016

(I posted the original mail in the thread on the google group)

I think going through a deployment is the best way, too, because if there is a syntax error or whatever in the config, the new pods hopefully won't start and the deployment can be rolled back (or, in less common cases I suspect, you can even do a canary deployment of a config change).

But I'm not sure if the configmap should be updated in place, as you propose, or if it should be a different one (for kube internals, at least). If you do a config update with a syntax error, a pod is taken down during the deployment, a new one comes up and fails, and now there is no easy way to roll back because the configmap has been updated. So you probably need to update the configmap again and do another deploy. If it is a different configmap, IIUC, the rollback can be done easily.

@rata
Member

rata commented Apr 1, 2016

Sorry to bother again, but can this be tagged for milestone v1.3 and, maybe, a lower priority?

@rata
Member

rata commented Apr 6, 2016

@bgrant0607 ping?

@thockin
Member

thockin commented Apr 6, 2016

What work is needed, if we agree deployment is the best path?


@bgrant0607 bgrant0607 modified the milestones: v1.3, next-candidate Apr 6, 2016
@bgrant0607
Member Author

@rata Sorry, I get zillions of notifications every day. Are you volunteering to help with the implementation?

@thockin We need to ensure that the parent/owner on ConfigMap is set to the referencing ReplicaSet(s) when we implement cascading deletion / GC.

@rata
Member

rata commented Apr 6, 2016

@bgrant0607 no problem! I can help, yes. Not sure I can get time from my work and I'm quite busy with university, but I'd love to help and probably can find some time. I've never really dealt with kube code (I did a very simple patch only), but I'd love to do it :)

Also, I guess that a ConfigMap can have several owners, right? I think right now it can be used in several RSs, and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious).

Any pointers on where to start? Is someone willing to help with this?

PS: @bgrant0607 sorry the delay, it was midnight here when I got your answer :)

@pmorie
Member

pmorie commented Apr 6, 2016

@rata

> But I'm not sure if the configmap should be updated, as you propose, or if it should be a different one (for kube internals, at least).

If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.

> Also, I guess that a ConfigMap can have several owners, right?

@bgrant0607 I also have the same Q here -- I think we will need to reference-count configmaps / secrets since they can be referred to from pods owned by multiple different controllers.

> I think right now it can be used in several RSs and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious).

Cascading deletion has its own issues: #23656 and #19054

@caesarxuchao
Member

@rata, I'm working on cascading deletion and am putting together a PR that adds the necessary API, including the "OwnerReferences". I'll cc you there.

@rata
Member

rata commented Apr 6, 2016

@caesarxuchao thanks!

@rata
Member

rata commented Apr 6, 2016

On Wed, Apr 06, 2016 at 09:37:25AM -0700, Paul Morie wrote:

> @rata
>
> > But I'm not sure if the configmap should be updated, as you propose, or if it should be a different one (for kube internals, at least).
>
> If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.

Sure, but I guess the same should probably work for both, right?

I imagine, for example, the "internals" ConfigMaps using a name like -v<kube-version/commit hash>.

This way, when you upgrade, the old ConfigMap will become orphaned and should be deleted, right? Or am I missing something?

I think this can work for both.

> > I think right now it can be used in several RSs and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious).
>
> Cascading deletion has its own issues: #23656 and #19054

Ohh, thanks!

@bgrant0607
Member Author

@caesarxuchao @rata We'll likely need a custom mechanism in Deployment to ensure that a referenced ConfigMap is owned by the generated ReplicaSets that reference it.

@dudicoco

dudicoco commented Feb 7, 2023

@Fran-Rg https://github.com/stakater/Reloader/ is the tool you're looking for, but as @remram44 stated it is not a safe mechanism for rolling updates as it is best practice to keep your pods immutable.

We're using a hash with helm and it works great; you don't even need to think about this problem anymore once you've implemented this solution within the chart.
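Roughly, the pattern (following the checksum trick from the Helm docs; the template path and names are chart-specific) is to render a hash of the ConfigMap template into a pod-template annotation, so any config change alters the pod spec and rolls the Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
      annotations:
        # Any change to templates/configmap.yaml changes this hash, which changes
        # the pod template and therefore triggers a rolling update.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
      - name: app
        image: example/app:1.0
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-config
```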

@hedgss

hedgss commented Feb 7, 2023

@dudicoco
We are also using it in our k8s clusters. It works properly and is a good workaround.

@adrian-gierakowski

kustomize (built into kubectl) will also hash the ConfigMap, add the hash to its name, and replace references to it across the project. Spinnaker has been doing the same thing for years.
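For example, a kustomization like this (file names are illustrative) generates a hashed ConfigMap name and rewrites every reference to it:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: app-config
  files:
  - config.yaml
# The generated object gets a name like app-config-<hash>, and references to
# "app-config" in deployment.yaml are rewritten to match, so a content change
# rolls the Deployment.
```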

@sbrunner

sbrunner commented Feb 7, 2023

The hash is not always possible: we sometimes use a ConfigMap or Secret created outside of the chart, so currently we use the Reloader operator as a workaround.

@cccsss01

> @Fran-Rg https://github.com/stakater/Reloader/ is the tool you're looking for, but as @remram44 stated it is not a safe mechanism for rolling updates as it is best practice to keep your pods immutable.
>
> We're using a hash with helm and it works great; you don't even need to think about this problem anymore once you've implemented this solution within the chart.

@dudicoco any chance you'd write a blog about it? I'm working on implementing this as we speak; it's been a rough journey.

@mubarak-j

You'd think that being the most upvoted issue (by far) would mean higher prioritization to bring it to the finish line. We have to give credit to @kfox1111 for driving the associated KEP since 2019 💪🏼

@cccsss01 you can find the examples in the Helm docs or here

@dims
Member

dims commented May 24, 2023

> higher prioritization

For whom, @mubarak-j? We are all individuals here. There is a KEP process, and someone has to drive it. If it is a high priority for someone, they need to step up and run with it. Please don't put the responsibility on someone else.

@sftim
Contributor

sftim commented May 24, 2023

BTW, there's a statistical effect here. Improvements that have been more popular may have garnered fewer upvotes, because they were so popular that people made time to work on them.

The length of time that this has been important but not staffed has allowed it to receive reactions that actual shipped features didn't attract.

If anyone wants to contribute on defining and implementing this enhancement, we've got guides at https://k8s.dev/ that are aimed at helping you learn how.

PS the KEP for this is kubernetes/enhancements#3704

@prashil-g

When is this feature planned to be released?

@hedgss

hedgss commented Jun 1, 2023

This issue has already been open for 7 years, so I'd NOT expect it to be implemented soon.
You can use https://github.com/stakater/Reloader/ as a workaround.
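If you go that route, the usual setup (as I understand the Reloader README; the annotation key is theirs, the rest of the manifest is illustrative) is just an annotation on the workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  annotations:
    reloader.stakater.com/auto: "true"   # Reloader triggers a rolling restart when a referenced ConfigMap or Secret changes
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example/app:1.0
        envFrom:
        - configMapRef:
            name: app-config
```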

magentalabs-serviceagent-1 pushed a commit to OS2mo/os2mo-helm-chart that referenced this issue Jun 28, 2023
Previously, environment variables were templated into a ConfigMap which
was referenced through an `envFrom` in the deployment. Unfortunately,
Kubernetes does not restart deployments on changes to their referenced
ConfigMaps[1], so this indirection means that deployments have to be
restarted manually every time a change is made - something that is very
easy to forget in an otherwise GitOpsy workflow.

[1] kubernetes/kubernetes#22368
magentalabs-serviceagent-1 pushed a commit to OS2mo/os2mo-helm-chart that referenced this issue Jun 28, 2023
Previously, environment variables were templated into a ConfigMap which
was referenced through an `envFrom` in the deployment. Unfortunately,
Kubernetes does not restart deployments on changes to their referenced
ConfigMaps[1], so this indirection means that deployments have to be
restarted manually every time a change is made - something that is very
easy to forget in an otherwise GitOpsy workflow.

The `environment_containing_json` does not seem to serve any purpose,
but was probably copied from an old method of templating the ConfigMap
using a for-loop instead of `toYaml`.

[1] kubernetes/kubernetes#22368
github-merge-queue bot pushed a commit to sematic-ai/sematic that referenced this issue Nov 8, 2023
currently, pods in the sematic server deployment don't restart when the
configmap associated with them changes. on every helm upgrade, helm
_does_ rerun the migration pod with the updated configmap values, but
the deployment pods themselves stick around unchanged. this is a known
issue in k8s: kubernetes/kubernetes#22368

to remove any confusion here, we take a checksum of all of the values in
the helm chart and apply the checksum as an annotation on the pods. with
this, k8s will forcibly refresh the pods on _any_ change to the helm
values. while that is a bit of overkill (not all changes technically
require the pods to restart), it is far less error-prone than the status
quo.

also documented as a helm tip here:
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments