Unable to perform helm upgrade due to resource conflict #6850
Output of helm version: v3.0.0-rc.1

Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-eks-5ac0f1", GitCommit:"5ac0f1d9ab2c254ea2b0ce3534fd72932094c6e1", GitTreeState:"clean", BuildDate:"2019-08-20T22:39:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS (EKS)

We seem to be experiencing a weird bug when doing helm upgrade. The error states: "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management".

We've tested on the following helm versions:

Helm versions "v3.0.0-beta.2" and "v3.0.0-beta.3": we get the error "Error: UPGRADE FAILED: no ServiceMonitor with the name "bid-management" found", though I can confirm it exists.

Helm versions "v3.0.0-rc.1", "3.0.0-beta.4" and "3.0.0-beta.5": we get the error above: "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management".
Can you provide a set of steps to reproduce the issue?
@bacongobbler Apologies for the delay. I realised it's harder to reproduce locally with minikube, since we've got everything set up for/with AWS EKS. At the moment I can confirm the apiVersion of the ServiceMonitor doesn't change, so this doesn't seem to be related to #6583. When I run helm template the first time:
After upgrading, and once the resource has been created successfully, I run helm template again and get back the following:
After running helm upgrade a second time, I get back the error mentioned above.
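For anyone trying to reproduce: a simple way to capture what changed between those two renders (a sketch; the release and chart names here are placeholders):

```bash
# Capture both renders and diff them to confirm the ServiceMonitor's
# apiVersion really is unchanged between runs:
helm template bid-management ./chart > render-1.yaml
# ... perform the upgrade ...
helm template bid-management ./chart > render-2.yaml
diff render-1.yaml render-2.yaml
```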
@bacongobbler I'm still going to try to reproduce the steps locally with minikube, but it might take longer than expected.
Facing the same issue here. @bacongobbler, @efernandes-ANDigital I cannot repro it (tried on GKE).
@aespejel What kinds of resources are conflicting for you?
Namespaces, which makes sense given the order in which helm tries to apply manifests, right @thomastaylor312?
Yep, Namespaces go first, but I was just checking whether this was happening with specific kinds of resources or with an assortment.
Just to add, we noticed something else upon disabling the service monitor. When running helm upgrade, it returns a success message: "Release "bid-management" has been upgraded. Happy Helming!" etc. However, upon checking the servicemonitors api-resources, we still see the ServiceMonitor that was created.
What we've just noticed is that for the same charts with other services it works just fine and we don't have the issue. The services use the exact same charts with just a few configuration changes per service... very weird.
The problem also happens while trying to install a chart (e.g. prometheus-operator): if the install fails and you try to install it again, helm complains about a resource conflict, and if you try to remove the chart, it complains that it has never been deployed.
@vakaobr I doubt it's the same issue. When the first install fails (and only with the first install), as you noticed, helm doesn't record a release. Hence helm won't have any information about a release to compare with already-deployed resources, and will try to install them, showing that message because some of the resources actually were installed. You can probably solve this by using --atomic with the installation, or by using helm upgrade --install --force, being careful with --force since it will delete and re-create resources.
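A minimal sketch of those two suggestions (release and chart names are placeholders):

```bash
# Roll back automatically if the install/upgrade fails, instead of leaving
# a half-created release behind:
helm upgrade --install --atomic myrelease ./mychart

# Or force through the conflict; as noted above, --force can delete and
# re-create resources, so use it with care:
helm upgrade --install --force myrelease ./mychart
```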
Update: still happening after updating to helm v3.0.0 (stable). @efernandes-ANDigital, @bacongobbler, @thomastaylor312
Workaround: |
Did you manage to save the release ledger before deleting them? It would've been helpful for reproducing the issue if we had our hands on a solid reproducible case.
I get this error when trying to change the apiVersion of a deployment to apps/v1.
@sheerun did you see my answer above? The tl;dr is that you have to manually remove the old object in order to "upgrade". The two schemas are incompatible with each other and therefore cannot be upgraded from one to the next in a clean fashion. Are you aware of any tooling that handles this case?
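For concreteness, a sketch of that manual path (resource, release, and namespace names are placeholders; note the deletion causes downtime until the upgrade re-creates the object):

```bash
# Remove the old object that was created under the incompatible apiVersion...
kubectl delete deployment myapp -n myns
# ...then let the upgrade re-create it from the new manifests:
helm upgrade myrelease ./mychart -n myns
```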
This doesn't really help, because I still need to manually remove old resources. I'd expect a flag for helm upgrade that handles this. It's a very important issue right now, because kubernetes 1.16 just dropped support for the old apis, so we need to upgrade.
I see your point... We could potentially support a new flag for that. If you have suggestions, we'd love a PR with tests, docs, etc. It's certainly cropping up more and more especially with the 1.16 release, so we'd be happy to look at proposals to handle that case. |
any updates on this? |
#7082 should handle this case if someone wants to start working on that feature. |
If you are having to use these workarounds: #6646 (comment), you can use the following script I created to automate that process: https://gist.github.com/techmexdev/5183be77abb26679e3f5d7ff99171731 |
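A hypothetical sketch of what such automation can look like (this is not the linked gist; it assumes the error format quoted in this issue, and that kubectl accepts the kind name as printed):

```bash
#!/usr/bin/env bash
# Retry the upgrade, deleting whichever resource the conflict error names,
# until the upgrade succeeds or a non-conflict error appears.
set -u
while true; do
  out=$(helm upgrade myrelease ./mychart 2>&1) && break
  conflict=$(echo "$out" | grep -o 'kind: [^,]*, namespace: [^,]*, name: .*' || true)
  [ -z "$conflict" ] && { echo "$out"; exit 1; }
  kind=$(echo "$conflict" | sed 's/kind: \([^,]*\),.*/\1/')
  ns=$(echo "$conflict"   | sed 's/.*namespace: \([^,]*\),.*/\1/')
  name=$(echo "$conflict" | sed 's/.*name: //')
  kubectl delete "$kind" "$name" -n "$ns"
done
```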
A similar error.
@jason-liew This issue is about a different thing that is not related to the number of releases. You're fixing another bug with a similar error message. This bug is related to a change of a resource's api version.
@sheerun sorry, I have deleted the reference in the commit message and edited the comment above.
What is the real problem here? It's possible to update an object with kubectl even across api changes without any issues. The object does not have to be deleted (it can simply be kubectl apply/replace), so why can't Helm do the same?
@bacongobbler I agree that, from the k8s point of view, it's a breaking change between API versions. However, k8s has a design for handling such a case and migrating an object from one version to another. Thanks.
A single k8s object may be converted from one version to another if they are compatible. See https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/v1/conversion.go as an example.
I've run into an issue that's related to this. In my case, I've enabled an option that creates a namespace which already exists, and the upgrade fails with the same conflict,
which makes sense, but blocks provisioning. I've tried this with and without the flags mentioned above. I don't know if this has been floated before, but maybe there could be a way to tell Helm to "adopt" resources if they already exist, i.e., in the case of an existing namespace it would be patched with the user-supplied manifest and understood to be managed by Helm from that point.
That is a conversion from apps/v1 to another internal representation. You cannot use this to convert from v1beta1 to v1. Look at the code more closely.
Kubernetes clusters support multiple API versions, but they are treated as separate discrete objects. The internal schemas are completely different. There is no "convert from v1beta1 to v1" API we're aware of at this time. |
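To illustrate the "same object, multiple served versions" point, a sketch (the deployment name is a placeholder, and the v1beta1 path only exists on clusters older than 1.16):

```bash
# kubectl accepts fully-qualified resource.version.group names, so the same
# stored Deployment can be read at different served API versions:
kubectl get deployments.v1.apps web -o yaml | grep apiVersion
kubectl get deployments.v1beta1.extensions web -o yaml | grep apiVersion
```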
See #2730 |
@bacongobbler thanks for your answers and help here. I have the same issue with the api version, but in the cluster itself our deployment has apiVersion: apps/v1. It's really not convenient that you need to reinstall a production workload just to fix Helm metadata, since the real deployment has the correct API version. Any suggestions here? I am thinking of tweaking the metadata manually.
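If you do go down the manual route, the record Helm 3 compares against lives in a Secret in the release namespace. A sketch of inspecting it, at your own risk (names are placeholders; the payload comes back base64-encoded twice and then gzip-compressed JSON):

```bash
# Dump the stored release record (revision 1 of release "myrelease" in "myns"):
kubectl get secret sh.helm.release.v1.myrelease.v1 -n myns \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip > release.json
```

Writing an edited record back means reversing the encoding and patching the Secret; whether that is wise on a production cluster is another question.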
Hey, I am having the same issue. My issue is regarding a pre-existing StatefulSet. Any advice would be much appreciated. Thanks.
Let's open up discussions: |
Hello, I am facing the same issue. The only thing I have done is upgrade from helm 2.14.1 to the latest version, and we are getting the error mentioned above: **rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Service, namespace: *, name: ***. All the suggestions above about deleting the resource won't work for us, as this is production and the API is critical, with zero downtime required. Kindly assist. Thanks.
Here's a dirty hack that we use whenever a resource, such as a PV or PVC, already exists and we don't want to delete it, but do want to upgrade containers. This typically happens whenever we do a
I got this error just following the basic tutorial.
Hitting this for a
Same issue when installing a chart in two namespaces. My chart depends on the prometheus-operator chart, which creates a ClusterRole. ClusterRoles are cluster-scoped, so the second install conflicts with the one created by the first.
Same here. I migrated a helm 2 release to a helm 3 deployment, and afterwards it's no longer upgradeable because of the same error.
Could someone clarify what the solution is here? I see that this got reopened and then closed again 39 minutes later, but I didn't see an obvious solution in this thread.
There is no solution yet, but this one is promising and almost ready to land:
#7649 was merged this morning. |
Ohh, missed that ;) Well, then the answer to @micseydel's question is in the first post of #7649, in the Release Notes section.
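For anyone landing here: with the adoption logic from #7649 (shipped in Helm 3.2, as far as I know), Helm takes ownership of an existing resource once it carries the release metadata, instead of reporting a conflict. A sketch using the names from this issue (the chart path is a placeholder):

```bash
# Mark the existing object as belonging to the release...
kubectl annotate servicemonitor bid-management -n dcd \
  meta.helm.sh/release-name=bid-management \
  meta.helm.sh/release-namespace=dcd
kubectl label servicemonitor bid-management -n dcd \
  app.kubernetes.io/managed-by=Helm
# ...then re-run the upgrade and Helm adopts it:
helm upgrade bid-management ./chart -n dcd
```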