UPGRADE FAILED: No resource with the name "" found #1193
Comments
I'm running into a similar issue where I have a chart with bundled dependencies. If I add a new dependency and run an upgrade, it fails. So, if this is installed:
And then a new chart is added as a dependency:
When the release is upgraded with:
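The commands were lost in extraction; here is a hedged reconstruction of the sequence (chart, release, and dependency names are hypothetical):

```bash
# Install the chart with its original bundled dependencies
helm install --name myrelease ./mychart

# Add a new dependency to the chart (e.g. in requirements.yaml) and refresh charts/
helm dependency update ./mychart

# Upgrade the release; this fails with:
#   Error: UPGRADE FAILED: No resource with the name "" found
helm upgrade myrelease ./mychart
```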
@devth I'm not able to reproduce this issue on master. Are you still seeing this problem? What version of helm/tiller are you running? Thanks!
@elementalvoid I was also unable to reproduce the new dependency error on master. Are you still seeing this problem? What version of helm/tiller are you running? Thank you.
At the time I was on alpha 4. Using alpha 5 and @devth's example I was also unable to reproduce the issue.
Alright. I'll close this for now. Feel free to file an issue if you see either of these problems again. Thanks again.
@michelleN thanks! Sorry I haven't had time this week to attempt a repro on master. Looking forward to upgrading soon!
Same for me when moving a hostPath Deployment/Volume spec to PVC. |
Strange. I am seeing the same behavior trying to upgrade a chart in version 2.7.2 with a new role: Tiller complains that it can't find the role and fails the deployment, even though it really did create the role.
My situation was that I had a new resource, and I deployed the new version of the helm chart with the new resource. That deployment failed b/c I fat fingered some yaml. Well, the new objects were created in kubernetes. I fixed the yaml and ran the upgrade on my chart again, and voila, the error message that the resource is not found appears. I had to go into kubernetes and remove the new resources (in my case a role and rolebinding) that were created by the failed deployment. After that, the helm check to see whether the current object exists (https://github.com/kubernetes/helm/blob/7432bdd716c4bc34ad95a85a761c7cee50a74ca3/pkg/kube/client.go#L257) no longer finds it, and the resources are created again. Seems like a bug; maybe new resources from a failed chart should be accounted for?
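A hedged sketch of that manual cleanup (resource, release, and namespace names are hypothetical):

```bash
# Remove the orphaned objects that the failed deploy created
kubectl delete role my-new-role --namespace my-namespace
kubectl delete rolebinding my-new-rolebinding --namespace my-namespace

# Re-run the upgrade; helm now creates (and tracks) the resources itself
helm upgrade myrelease ./mychart
```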
Getting a similar error while upgrading. The configmap is created, but the upgrade still fails. My configmap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "proxy.fullname" . }}-config
  labels:
    app: {{ template "proxy.name" . }}
    chart: {{ template "proxy.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  asd: qwe
```
We have the same issue. |
I deleted the whole release and then installed it again. For now it seems to be working.
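For reference, a hedged sketch of that full reset on Helm 2 (release name hypothetical; note that --purge also discards the stored release history):

```bash
helm delete --purge myrelease
helm install --name myrelease ./mychart
```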
I am also having this issue.
This happens frequently with our usage of helm and requires a full delete and re-install of the release.
That workaround is not applicable if you use CI/CD.
I see the same issue as well: when there is a Deployment problem or similar, the secret/cm gets created but then Helm loses track of it, refusing to let you do much. I've seen it happen, rarely though, even on a non-broken release (i.e. one that appears to have gone through), but I have yet to figure out what could cause that.
We're also able to reproduce this issue (server v2.8.2) when adding resources to existing helm deployments. Having to delete the deployment and redeploy each time a new resource has to be added will be a big problem in production. |
In our case we were adding a configmap to a chart, and the chart failed to upgrade with the error above.
Note: We're using 2.7.2; on later versions this message has changed to include the type of the resource that can't be found. I believe this happens because when helm is determining what has changed, it looks for the new configmap resource in the old release and fails to find it. See https://github.com/kubernetes/helm/blob/master/pkg/kube/client.go#L276-L280 for the code where this error comes from. Tiller logs for the failing upgrade:
This problem also arises when changing the name of a resource: I'm changing the name of a Service in a release and it fails to upgrade with the same kind of "not found" error.
I'd be willing to create a PR to fix this behavior, but I'd like to know what the intended or suggested way of handling this is. Even a CLI flag that allows --force to take precedence would be great.
Agreed on the importance. This problem is especially awkward when you cannot simply delete a deployment.
I found our issue was because of a failed deploy. Helm doesn't attempt to clean up after a failed deploy, which means things like the new ConfigMap I added above get created, but without a reference in the 'prior' deploy. That means when the next deploy occurs, helm finds the resource in k8s and expects it to be referenced in the latest deployed revision (or something; I'm not sure what exact logic it uses to find the 'prior' release) to check what changes there are. It's not in that release, so it cannot find the resource, and fails. This is mainly an issue when developing a chart, as a failed deploy puts k8s in a state helm does not properly track. When I figured out this is what was happening, I knew I just needed to delete the ConfigMap from k8s and try the deploy again.
@krishicks Yes, this is one way to repro it. A failed deploy plus a never-created resource (i.e. an invalid configmap) can also cause this, I've noticed, which then leads to an unrecoverable state.
Atomic will not resolve the issue. Example chart: https://github.com/distorhead/ex-helm-upgrade-failure
The chart contains 2 deployments. Say hello to our friend again:
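A hedged sketch of why --atomic doesn't help here (release and chart names hypothetical): the rollback restores the release record, but the resources created by the failed upgrade are left behind untracked, so the retry hits the same lookup failure.

```bash
# First attempt fails part-way and auto-rolls back the release record
helm upgrade myrelease ./chart --atomic

# Retry: the leftover, untracked resources trigger the same error, e.g.
#   Error: UPGRADE FAILED: no Deployment with the name "..." found
helm upgrade myrelease ./chart --atomic
```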
@distorhead what was your expected behavior for this scenario?
Slightly off-topic about rollbacks, but anyway: some people want to use rollback, but do not want the rollback to occur automatically, immediately after a failed deploy.
Thanks for putting this workaround together @bacongobbler; it's essentially the process we arrived at as well. One painful issue here is that during complex upgrades, many new resources, at times a few dependency levels deep, may find themselves in this state. I haven't yet found a way to fully enumerate these states automatically, leading to situations where one needs to repeatedly fail an upgrade to "search" for all relevant resources. For example, recently a newly added dependency itself had a dependency on a postgresql chart. In order to resolve this issue it was necessary to delete a secret, configmap, service, deployment, and pvc, each found the long way 'round.
You could write a plugin to automate that.
@bacongobbler What solution would you recommend to take an application that's deployed as part of release A (for example a larger release made up of several applications) and break it out of release A into its own release (or vice versa) without incurring any downtime? The workaround of deleting resources would cause some downtime, and trying to update a resource via a different release results in the error that's described by this GitHub issue.
It sounds like the new chart gets installed and replaces the old chart even before a successful deploy. The same thing happens with a failing upgrade --install. It should not install if the chart is wrong.
This is a process I use to recover from this problem (so far it has worked every time without any incident... but be careful anyway):
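The steps themselves were lost in extraction; a hedged reconstruction based on the workarounds described elsewhere in this thread (all names hypothetical):

```bash
# 1. Read the resource kind and name out of the upgrade error, e.g.
#      Error: UPGRADE FAILED: no ConfigMap with the name "my-config" found

# 2. Delete that resource manually so helm can recreate and track it
kubectl delete configmap my-config --namespace my-namespace

# 3. Re-run the upgrade, and repeat for each resource the error reports
helm upgrade myrelease ./mychart
```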
@bacongobbler @michelleN I believe the error message should state "there is a conflict because the resource wasn't created by helm and manual intervention is required" rather than "not found". This small change alone would improve the user experience by a good margin.
@selslack I would very much be in favor of improving the error message 👍
@michelleN I've prepared a PR to change the error text: #5460.
I'm experiencing this issue and I'm not sure how to resolve it. I tried all the steps listed by @reneklacan here: #1193 (comment). Unfortunately that didn't work. The only thing that resolves the issue is to delete the resource generating the error message and upgrade again. However, the next helm upgrade will fail with the same error, and I have to delete the resource again and re-upgrade... this isn't sustainable or good. I have two environments I use helm to deploy to as part of our CI process: a QA and a production environment. The QA environment had the same issue, so I wiped it out and re-upgraded. However, I can't do this for the production environment; I can't just wipe it out and re-upgrade, so currently I'm stuck deleting the resource before each deploy. I'm just lucky it's not an important resource.
@zacharyw what error are you facing at the moment? Can you share any additional info that would help with debugging this, maybe the relevant output? Feel free to send me an email with more info if you don't want to spam this issue with potentially unrelated data (rene (at) klacan (dot) sk).
Please see #1193 (comment) for a possible diagnosis and workaround, @zacharyw.
@reneklacan It's the same "not found" error, for an ingress. After deleting the offending ingress and allowing helm upgrade to recreate it, the status of my most recent release is fine. However, if I were to try to upgrade again, it would fail. @bacongobbler Unless I'm misunderstanding, I think I already am doing the workaround in that comment: I delete the resource and let it get recreated... the issue is that I have to do this every time.
@reneklacan's suggestion in #1193 (comment) saved my life. It's a disappointment that Helm fails this way; deleting things in pretty much any environment is far from ideal.
It would be great if helm updated its own database when this kind of error appears, and then retried.
If there are further issues arising without those flags, please open a new issue. Thanks!
The issue is closed, but I thought I'd add a comment about how to deal with it without having to delete the helm release or the running deployments. I reproduced the issue with the following steps:
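The steps were lost in extraction; a hedged reconstruction consistent with the fix described just below (all names hypothetical): add a new Service to the chart whose port is mistakenly a string, so the upgrade creates some resources and then fails validation, and then re-run the upgrade.

```yaml
# templates/svc.yaml -- the broken version of the new resource
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: "80"   # invalid: port must be a number, not a string
```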
I was able to upgrade after correcting the port to a number, without having to run helm delete.
Repro
Create a simple Chart.yaml:
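The file contents were lost in extraction; a minimal hedged reconstruction (name and version are hypothetical):

```yaml
name: upgrade-repro
version: 0.1.0
```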
With a single K8S resource in the templates/ dir:
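A hedged reconstruction of that resource (the status output below shows a ConfigMap named cm1 with one data key; the key itself is hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm1
data:
  example: value
```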
Install the chart:
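The command was lost in extraction; presumably something like:

```bash
helm install .
```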
Verify the release exists:
```
$ helm status exasperated-op
Last Deployed: Tue Sep 13 12:43:23 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ConfigMap
NAME      DATA      AGE
cm1       1         1m
```
Now add a 2nd K8S resource in the templates/ dir:
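Again a hedged reconstruction, mirroring cm1:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm2
data:
  example: value
```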
Upgrade the chart:
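Presumably (hedged) the upgrade command and the resulting error were:

```bash
helm upgrade exasperated-op .
# Error: UPGRADE FAILED: No resource with the name "" found
```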
That's weird. Bump the version in Chart.yaml:
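The bumped file was also lost; hedged:

```yaml
name: upgrade-repro
version: 0.2.0
```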
Try upgrade again:
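Hedged reconstruction of the retry, which fails the same way:

```bash
helm upgrade exasperated-op .
# Error: UPGRADE FAILED: No resource with the name "" found
```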
Expected
helm upgrade should create the cm2 resource instead of erroring that it doesn't exist.
Edit: to be clear, helm is creating the cm2 ConfigMap, but helm fails regardless.
Current state after performing steps
```
$ helm status exasperated-op
Last Deployed: Tue Sep 13 12:43:23 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ConfigMap
NAME      DATA      AGE
cm1       1         6m

$ kubectl get configmap --namespace default
NAME      DATA      AGE
cm1       1         6m
cm2       1         4m
```