Helm upgrade fails to upgrade ConfigMap #2485

Closed
adam-sandor opened this issue May 23, 2017 · 3 comments


adam-sandor commented May 23, 2017

I keep having issues with ConfigMaps not getting updated on helm upgrade. It's not a permanent issue, so I can't give an exact scenario to reproduce it; I'll try to give as much info as possible. By "not a permanent issue" I mean it doesn't always happen when I change a ConfigMap, but when it does happen I can run the upgrade any number of times and the value won't change.

What I'm trying to do

I have a ConfigMap that looks like this. I'm trying to change the value of APP_DOMAIN from test.mydomain.com to develop.europa.mydomain.com.

kubectl describe cm jupiter-config
Name:		jupiter-config
Namespace:	develop
Labels:		<none>
Annotations:	kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"API_URL":"http://node-services","APP_DOMAIN":"test.mydomain.com","LOG_LEVEL":"info",...

Data
====
SHOW_ADS:
----
true
API_URL:
----
http://node-services
APP_DOMAIN:
----
test.mydomain.com
HTTP_DEBUG:
----
true
LOG_LEVEL:
----
info
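
A quicker way to read back just that one key between upgrade attempts (same ConfigMap and namespace as above) is a jsonpath query:

kubectl get configmap jupiter-config -n develop -o jsonpath='{.data.APP_DOMAIN}'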

I run a helm upgrade to change this value to develop.europa.mydomain.com; the debug output from Helm is below. The upgrade completes successfully, but the ConfigMap stays the same.
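The exact upgrade invocation isn't shown here; it was along these lines, with jupiter as a placeholder release name and the user-supplied values coming from a file:

helm upgrade --debug jupiter ./jupiter -f values.yaml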

helm version output

Client: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}

helm upgrade output

REVISION: 61
RELEASED: Tue May 23 15:11:34 2017
CHART: jupiter-0.1.0
USER-SUPPLIED VALUES:
cluster: europa
config:
  API_URL: http://node-services
  APP_DOMAIN: develop.europa.mydomain.com
  HTTP_DEBUG: "true"
  LOG_LEVEL: info
  SHOW_ADS: "true"
resources:
  limits:
    cpu: 500m
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 100Mi
scaling:
  maxReplicas: 4
  minReplicas: 2
version: 1

COMPUTED VALUES:
cluster: europa
config:
  API_URL: http://node-services
  APP_DOMAIN: develop.europa.mydomain.com
  HTTP_DEBUG: "true"
  LOG_LEVEL: info
  SHOW_ADS: "true"
resources:
  limits:
    cpu: 500m
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 100Mi
scaling:
  maxReplicas: 4
  minReplicas: 2
version: 1

HOOKS:
MANIFEST:

---
# Source: jupiter/templates/config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: jupiter-config
data:
  API_URL: http://node-services
  APP_DOMAIN: develop.europa.mydomain.com
  HTTP_DEBUG: "true"
  LOG_LEVEL: info
  SHOW_ADS: "true"
adamreese (Member) commented:

It looks like the ConfigMap was modified with kubectl. Helm diffs against the last release rather than against what is running in the cluster.
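
One way to see the mismatch (jupiter is a placeholder release name):

helm get manifest jupiter                                  # what Helm recorded for the last release
kubectl get configmap jupiter-config -n develop -o yaml    # what is actually running

If a kubectl edit changed the live object while the recorded release already matches the new chart output, Helm sees no difference between releases and applies nothing, so the live ConfigMap keeps its edited value.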

adam-sandor (Author) commented:

Yes that is probably the case. Thanks for the quick answer!

missedone commented:

I had the same issue; it cost me about an hour to figure out that Helm had failed to update the ConfigMap.
I think this is quite confusing to users: once the ConfigMap has been modified with kubectl, it won't be updated except by 1) changing the ConfigMap values through Helm, or 2) deleting the ConfigMap and using Helm to redeploy it (see the sketch below).

So could you add a flag, or recognize the --force flag, so that the check is done against what is running in the cluster rather than against the last release?
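
A sketch of workaround 2) as described above (release and chart names are placeholders matching the earlier examples):

kubectl delete configmap jupiter-config -n develop
helm upgrade jupiter ./jupiter -f values.yaml    # re-renders the chart and re-creates the ConfigMap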
