Force pod restarts on config map changes #1094
Merged
Currently, pods in the Sematic server deployment don't restart when the ConfigMap associated with them changes. On every `helm upgrade`, Helm does rerun the migration pod with the updated ConfigMap values, but the deployment pods themselves stick around unchanged. This is a known Kubernetes issue: kubernetes/kubernetes#22368
To remove any confusion here, we take a checksum of all the values in the Helm chart and apply it as an annotation on the pods. With this, Kubernetes will forcibly refresh the pods on any change to the Helm values. While that is a bit of overkill (not all changes technically require a restart), it is far less error-prone than the status quo.
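As a rough sketch of the technique (the template path and surrounding fields here are illustrative, not the actual chart), the annotation in the deployment's pod template might look like:

```yaml
# templates/deployment.yaml (illustrative excerpt)
spec:
  template:
    metadata:
      annotations:
        # Hash all chart values; any value change produces a new
        # checksum, which changes the pod template and triggers a
        # rolling restart of the deployment's pods.
        checksum/config: {{ .Values | toYaml | sha256sum }}
```

Because the checksum lives in the pod template's metadata, Kubernetes treats any values change as a pod-spec change and rolls the deployment, even though the ConfigMap itself is updated in place.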
This is also documented as a Helm tip here: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments