
helm upgrade --install no longer works #3208

Closed
kbrinnehl opened this issue Nov 28, 2017 · 57 comments · Fixed by #3335

Comments

@kbrinnehl

As of helm v2.7.1, after updating tiller, running helm upgrade with the --install flag no longer works. The following error is displayed: Error: UPGRADE FAILED: "${RELEASE}" has no deployed releases. Downgrading to v2.7.0 or v2.6.2 does not produce the error.

kbrinnehl changed the title from "helm update --install no longer works" to "helm upgrade --install no longer works" on Nov 28, 2017
@tcolgate

tcolgate commented Nov 30, 2017

I thought I was experiencing the same problem, but it turned out I just had an old deleted (but not purged) release hanging around. Check helm list -a, and if your release is there, run helm delete --purge releasename. helm upgrade -i is working successfully on 2.7.2 for me.
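
For reference, a minimal sketch of that workaround (the release name myrelease and chart path ./mychart are placeholders, not taken from this thread):

# check whether an old deleted-but-not-purged release record still exists
helm list -a | grep myrelease

# if it shows up (e.g. with a DELETED status), purge it; note this removes the release history
helm delete --purge myrelease

# the upgrade-or-install should then work again
helm upgrade -i myrelease ./mychart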

@bacongobbler
Member

bacongobbler commented Nov 30, 2017

This is a side-effect of fixing issues around upgrading releases that were in a bad state. #3097 was the PR that fixed this issue. Is there an edge case here that we failed to catch?

Check helm list -a as @tcolgate mentioned. Explaining how to reproduce it would also help determine whether it's an uncaught edge case or a bug.

@TD-4242

TD-4242 commented Nov 30, 2017

Also having the same problem, along with duplicate release names:

$>helm ls -a|grep ingress
nginx-ingress            	9       	Thu Nov 30 11:33:06 2017	FAILED  	nginx-ingress-0.8.2        	kube-ingress
nginx-ingress            	11      	Thu Nov 30 11:37:58 2017	FAILED  	nginx-ingress-0.8.2        	kube-ingress
nginx-ingress            	12      	Thu Nov 30 11:38:50 2017	FAILED  	nginx-ingress-0.8.2        	kube-ingress
nginx-ingress            	8       	Thu Nov 30 11:31:27 2017	FAILED  	nginx-ingress-0.8.2        	kube-ingress
nginx-ingress            	10      	Thu Nov 30 11:33:53 2017	FAILED  	nginx-ingress-0.8.2        	kube-ingress
$>helm diff nginx-ingress ./nginx-ingress
Error: "nginx-ingress" has no deployed releases

@bacongobbler
Member

When you were upgrading, what message was displayed?

@TD-4242

TD-4242 commented Nov 30, 2017

Same error as the diff above, but an install would say the release was already installed.

@bacongobbler
Member

I mean in the previous upgrade attempts that ended up in a FAILED status. I want to know how we get into the situation where all releases are in a failed state.

@TD-4242

TD-4242 commented Dec 1, 2017

Ohh, the duplicate release name deployments? I'm not sure how that happens, but I get it quite often. Sometimes they are all in a DEPLOYED state, sometimes a mix of FAILED and DEPLOYED. We use a CI/CD Jenkins server that deploys on every PR merge, so we run several helm upgrades a day, typically only changing the container tag. Usually the duplicates are just annoying when looking at what's deployed; this was the first time we had a hard issue with them, and it happened while upgrading the ingress controller, which we don't normally do.

@bcorijn

bcorijn commented Dec 5, 2017

I seem to have ended up with something similar; I see a few duplicates in my release list:

$ helm ls
NAME                      REVISION    UPDATED                     STATUS      CHART                           NAMESPACE
.....
front-prod                180         Tue Dec  5 17:28:22 2017    DEPLOYED    front-1                         prod
front-prod                90          Wed Sep 13 14:36:06 2017    DEPLOYED    front-1                         prod 
...

All of them seem to be in a DEPLOYED state, but it could well be that a previous upgrade failed at some point, as I have hit that bug several times. I am still on K8S 1.7, so have not updated to helm 2.7 yet.

@s4nch3z

s4nch3z commented Dec 13, 2017

Same issue, can't upgrade over FAILED deploy

@aelbarkani

Same here using 2.7.2. The first attempt at a release failed. Then when I tried an upgrade --install I got the error "Error: UPGRADE FAILED: "${RELEASE}" has no deployed releases".

@winjer

winjer commented Dec 17, 2017

Same problem here with 2.7.2, helm upgrade --install fails with:

Error: UPGRADE FAILED: "APPNAME" has no deployed releases

@winjer

winjer commented Dec 17, 2017

If the release is entirely purged with helm del --purge APPNAME, then a subsequent upgrade --install works OK.

@prein

prein commented Dec 18, 2017

I'm experiencing the same problem. Combined with #3134, that leaves no option for automated idempotent deployments without some scripting to work around it.

@winjer, I just tried deleting with --purge and for me it didn't work, although the error changed:
/ # helm upgrade foo /charts/foo/ -i --wait
Error: UPGRADE FAILED: "foo" has no deployed releases
/ # helm delete --purge foo
release "foo" deleted
/ # helm upgrade foo /charts/foo/ -i --wait
Release "foo" does not exist. Installing it now.
Error: release foo failed: deployments.extensions "foo-foo-some-service-name" already exists

@tcolgate

tcolgate commented Dec 18, 2017

@prein This is because you have a resource that is not "owned" by helm but already exists in the cluster. The behaviour you are experiencing seems correct to me. The deploy cannot succeed because helm would have to "take ownership" of an API object that it did not own before.

It does make sense to be able to upgrade a FAILED release, if the new manifest is actually correct and doesn't conflict with any other resources in the cluster.
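
As a hedged illustration of the above (using the deployment name from @prein's error, and assuming the default namespace and that the object is safe to remove), one manual way out is to delete the conflicting object so Helm can recreate and own it:

# the deployment named in the error exists in the cluster but is not part of the release
kubectl get deployment foo-foo-some-service-name

# if it is safe to do so, delete it manually so Helm can recreate and own it
kubectl delete deployment foo-foo-some-service-name

# then retry the upgrade-or-install
helm upgrade foo /charts/foo/ -i --wait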

@pierreozoux
Contributor

I'm also seeing this behavior on a release called content:

helm upgrade --install --wait --timeout 300 -f ./helm/env/staging.yaml --set image.tag=xxx --namespace=content content ./helm/content
Error: UPGRADE FAILED: no resource with the name "content-content" found
helm list | grep content
content                        	60      	Mon Dec 25 06:02:38 2017	DEPLOYED	content-0.1.0                	content           
content                        	12      	Tue Oct 10 00:02:24 2017	DEPLOYED	content-0.1.0                	content           
content                        	37      	Tue Dec 12 00:42:42 2017	DEPLOYED	content-0.1.0                	content           
content                        	4       	Sun Oct  8 05:58:44 2017	DEPLOYED	k8s-0.1.0                    	content           
content                        	1       	Sat Oct  7 22:29:24 2017	DEPLOYED	k8s-0.1.0                    	content           
content                        	61      	Mon Jan  1 06:39:21 2018	FAILED  	content-0.1.0                	content           
content                        	62      	Mon Jan  1 06:50:41 2018	FAILED  	content-0.1.0                	content           
content                        	63      	Tue Jan  2 17:05:22 2018	FAILED  	content-0.1.0                	content           

I will have to delete this to be able to continue deploying; let me know if there is anything I can do to help debug this.
(I think we should rename the issue, as it is more about the duplicates?)
(we also run 2.7.2)

@pierreozoux
Contributor

I actually have another duplicate release on my cluster. If there is any command I can run to help debug it, let me know!

@rcorre

rcorre commented Jan 8, 2018

just upgraded to tiller 2.7.2 and we're seeing the same thing. helm delete RELEASE_NAME followed by helm upgrade RELEASE_NAME . fails with Error: UPGRADE FAILED: "RELEASE_NAME" has no deployed releases. upgrade is the intended way to restore a deleted (but not purged) release, correct?

@rcorre

rcorre commented Jan 8, 2018

Looks like you can restore the release by rolling back to the deleted version.
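
A rough sketch of that recovery path, with RELEASE_NAME as a placeholder (the revision number comes from the release's history):

# list the release's revisions, including the deleted one
helm history RELEASE_NAME

# roll back to the revision that was deployed before the delete
helm rollback RELEASE_NAME <REVISION>

# subsequent upgrades should work again
helm upgrade RELEASE_NAME .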

adamreese added a commit to adamreese/helm that referenced this issue Jan 11, 2018
`helm list` should only list latest release

fixes helm#3208
adamreese added a commit that referenced this issue Jan 12, 2018
`helm list` should only list latest release

fixes #3208
@ptagr

ptagr commented Jan 18, 2018

Seeing the same issue with v2.7.2; it fails when there are no previously successfully deployed releases.

@stealthybox
Contributor

stealthybox commented Jan 25, 2018

Also seeing two potential versions of this issue:


in CI:

+ helm upgrade --install --wait api-feature-persistent-data . --values -
+ cat
WARNING: Namespace doesn't match with previous. Release will be deployed to default
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
Error: UPGRADE FAILED: "api-feature-persistent-data" has no deployed releases

on my local machine:

(both in my OSX bash and in a gcloud/kubectl container)

+ helm upgrade --install --wait api-feature-persistent-data . --values -
+ cat
WARNING: Namespace doesn't match with previous. Release will be deployed to default
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
2018/01/25 00:19:07 warning: destination for annotations is a table. Ignoring non-table value <nil>
Error: UPGRADE FAILED: no PersistentVolumeClaim with the name "api-feature-persistent-data-db" found

The warnings are normal for our chart.
The errors are interesting because one of our subcharts has a pvc.yaml in it.

helm del --purge <release> does mitigate the problem.
This does make our CI pipeline difficult to upgrade.
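
One way a CI pipeline could script around this at the time is to purge only when the latest revision is FAILED before running upgrade --install. A rough sketch only, with the release name and values file as placeholders; note that purging deletes the release's resources and history, which may not be acceptable, as discussed further down the thread:

#!/bin/sh
set -e
RELEASE=my-release   # placeholder release name

# if the latest revision of the release is FAILED, purge it before upgrading
if helm list -a "$RELEASE" | grep -q FAILED; then
  helm delete --purge "$RELEASE"
fi

helm upgrade --install --wait "$RELEASE" . --values values.yaml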

@peay
Contributor

peay commented Feb 1, 2018

@adamreese what is the final resolution for this issue? We're on 2.8 and, even with the change to helm list, we still cannot upgrade a previously FAILED release.

In particular, we're running into the following type of issues:

  • deploy a release, OK
  • upgrade --install --wait, but maybe there's a small bug and --wait doesn't succeed (e.g., liveness probes never make it up)
  • after fixing the chart, upgrade --install --wait fails with xxx has no deployed releases

Deleting/purging is not desirable or acceptable here: the release may have multiple resources (pods, load balancers) that are not affected by the one resource that won't go up. In previous Helm versions, upgrade --install allowed us to only patch the change that broke the full release without having to remove all the resources.

Helm is the owner of all resources involved at all times here -- the release is only marked FAILED because --wait timed out before all resources were in a good state. I assume the same will happen if a pod is a bit too slow to start, and in many similar cases.

@bacongobbler
Member

@peay see #3353 for follow-up discussion.

@peay
Contributor

peay commented Feb 1, 2018

Thanks -- that clears it up. Actually realized we were only hitting it when we had no successful release to begin with. In that case, purge is a fine workaround.

@stealthybox
Contributor

@MythicManiac FWIW:
I still have our teams pinned on v2.7.0 because of this behavior.
We don't seem to have any issues with resources upgrading and deleting when they are supposed to while using helm upgrade --install with this version.

@KIVagant

KIVagant commented Oct 5, 2018

We also have this problem. It's very annoying that I need to delete K8s services and related AWS ELBs because helm says the release has no deployed releases. The package manager is great, but this issue leads to production downtime, which is not good.

@tcolgate

tcolgate commented Oct 5, 2018 via email

@KIVagant

KIVagant commented Oct 5, 2018

@tcolgate, thank you! I just fixed another problem (#2426 (comment)) with your workaround, and will try to test it for existing ELBs when I deploy a new chart over existing resources next week.

@KIVagant

KIVagant commented Oct 8, 2018

Doing a rollback to the original failed release can work.

@tcolgate, I just tested it and no, it doesn't work in the case of a first deploy.


$ helm upgrade --wait --timeout 900 --install myproject charts/myproject/myproject-1.1.1.tgz
14:53:18 Release "myproject" does not exist. Installing it now.
14:53:18 Error: release myproject failed: deployments.apps "myproject" already exists

$ helm list
NAME        	REVISION	UPDATED                 	STATUS  	CHART           	APP VERSION	NAMESPACE
myproject    	1       	Mon Oct  8 11:53:18 2018	FAILED  	myproject-1.1.1  	           	default

$ helm rollback myproject 1
Error: "myproject" has no deployed releases

@KIVagant

KIVagant commented Oct 8, 2018

I am curious: if Helm can't deploy a chart over existing resources, why does helm delete delete exactly those resources?

@KIVagant

KIVagant commented Oct 8, 2018

@thomastaylor312, we faced this issue, as well as #2426 (update: I found the real root cause for 2426), with helm 2.11.0. Do you think they should be reopened?

@krishofmans

I found this thread after hitting Error: UPGRADE FAILED: "xxx-service" has no deployed releases, even though the release was visible in helm ls -a.

I decided to check whether the issue was caused by an incorrect --set value, and --debug --dry-run --force actually STILL deleted my running pod ... my expectation was that a dry-run action would NEVER modify cluster resources.

It did work though, and the service could be redeployed afterwards, but we experienced downtime.

@stealthybox
Contributor

my expectation was that a dry run action would NEVER modify cluster resources.

This is a fair expectation -- we should make that... not happen

@bacongobbler
Member

I believe that was patched in #4402 but it'd be nice if someone were to check. Sorry about that!

@MohamedHedi

Same problem after upgrading to 2.11.0.

@stealthybox
Contributor

Cross post:
FairwindsOps/reckoner#48 (comment)

@gerbsen

gerbsen commented Feb 5, 2019

Why is this closed? Do we have a proper way to handle this now?

@stealthybox
Contributor

@gerbsen, there isn't a non-destructive way around this with current versions of Helm.
We still use Helm 2.7.0 for everything in my org. It is a very old version that has other bugs, but it does not have this issue.

@notque

notque commented Feb 13, 2019

Just had helm upgrade --install --force do a delete --purge and destroy my pvc/pv without telling me (on recycling). I had several failed releases, so the app was in a state where it was running in Kubernetes, but helm thought there were no running releases. Not good times at all.

@alex88

alex88 commented Feb 26, 2019

@notque after losing all Grafana dashboards twice, I've started doing backups before any kind of upgrade; having this kind of risk removes all the benefits of using helm.

@yehee

yehee commented Nov 4, 2019

For those who are seeking help, the issue was gone for me after upgrading helm to v2.15.2.

@ScubaDrew

Still seeing this issue on 2.16.0

@nick4fake

Why is it still closed? 2.16.1 is still affected

@alex88

alex88 commented Dec 10, 2019

@nick4fake I think it's a duplicate of #5595
