Helm install v2.14.0 "validation failed" error when using a template variable with value "" #5750
Comments
This bit me today as well.
Looks like it might be due to this commit: TL;DR, fix manifest validation. Unfortunately, if you have something already deployed with an empty string, you can't deploy something that's "fixed", as your already-deployed components will fail validation. Your only recourse is to
As discussed with a community member earlier today, #5576 was the change. Prior to 2.14, Helm silently accepted schema validation errors, but as of 2.14, all manifests are validated, including ones that were previously accepted. The end result is that upgrading to 2.14 causes Tiller to fail manifest validation on charts that were previously accepted, preventing upgrades. Sorry about that! The mitigation is easy: downgrade to 2.13.1 for now until a fix is released. #5643 should fix this, as the validation then only occurs for new manifests being added to the release, and we'd love to hear if that solves the issues raised here. If so, we may need to cut a 2.14.1 with the fix.
I can try #5643 tomorrow unless someone beats me to it.
Wouldn't the prudent course of action be to surface validation errors that were previously failing silently as a warning that, in a future release, becomes an error? Or at the least, honor a force flag or some such that allows the user to choose how to handle it?
Thank you everyone for your replies!
Check the output of
Same, using k8s 1.8.4:

```
Error: error validating "": error validating data: [ValidationError(Deployment.spec.template.spec.containers[1].ports[0]): unknown field "exec" in io.k8s.api.core.v1.ContainerPort, ValidationError(Deployment.spec.template.spec.containers[1].ports[0]): unknown field "initialDelaySeconds" in io.k8s.api.core.v1.ContainerPort]
Error: error validating "": error validating data: ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1beta1.StatefulSetSpec
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1beta1.StatefulSetSpec
```
Thanks @bacongobbler! Following your comment I upgraded my tiller:
However, I'm still getting the exact same error when running it. The commit corresponds to 9fb1996, so it looks like the issue still reproduces in my case despite the fix.
Can we get an ETA for the hotfix? I would really like to avoid patching my server with a self-built helm/tiller. Thank you!
@daniv-msft Could you try doing this in your template yaml?
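The snippet suggested in this comment appears to have been lost during extraction. A common workaround for this class of error is to guard the annotation so an unset value never renders as YAML nil; here is a minimal sketch (assuming the annotation sits under the pod template metadata, as the error message suggests; this is not the exact chart from this thread):

```yaml
# deployment.yaml (excerpt): hypothetical sketch
spec:
  template:
    metadata:
      annotations:
        # Only emit the annotation when buildID is actually set in values.yaml,
        # so an undefined value never renders as `buildID: ` (nil):
        {{- if .Values.buildID }}
        buildID: {{ .Values.buildID | quote }}
        {{- end }}
```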
We have a lot of Azure DevOps release pipelines (30+), and each of them tries to keep Helm at the latest stable build version. I could downgrade for now, but once the next build pipeline starts, the version would be back on 2.14.0, and I really don't want to go over all 30+ pipelines to disable the step and enable it again later. Sorry, but I need to wait for the hotfix. Is there any ETA on the hotfix?
This is the content of my deployment.yaml file that matches the path
Do you think a
@SeriousM Oh. What error are you getting currently? And I tried the
If you don't use
which will lead to the validation error described in the issue. I verified this by using a dummy chart that I created:
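The dummy chart itself is not shown in the comment; a minimal template that would reproduce the validation error might look like the following (the file layout and names are illustrative, not the commenter's actual chart):

```yaml
# templates/deployment.yaml (excerpt): hypothetical reproduction
metadata:
  annotations:
    # With buildID undefined (or set to "") in values.yaml, this line renders
    # as `buildID: `, i.e. a nil value, which Helm 2.14.0 rejects with
    # `unknown object type "nil"`:
    buildID: {{ .Values.buildID }}
```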
My error
I tried to modify my deployment.yaml by adding
This is the error I got when I removed the buildID annotation:
Regarding a 2.14.1 release, we probably won't be able to cut a release until after KubeCon.
KubeCon EU, which is next week, not November. :)
So we can expect a fix in the form of v2.14.1 on 24.5.19?
@SeriousM The first error you got, which is The next error
How can I deploy this image?
@SeriousM With any helm client version, you can use
To get the latest helm client (master), you can use this: https://helm.sh/docs/using_helm/#from-canary-builds
So it looks like #5643 does fix the manifest validation issue:
You still have to set any missing validation fields in the "new" templates, of course, but it will at least let you deploy over existing releases.
Thank you very much, I will try it out ASAP.
Same thing here: I tested the fix from the latest canary build and it works for us too.
This worked for me, thanks very much.
Does anyone know how to prevent GitLab pipelines from using helm:latest? We are deploying everything via our laptops since GitLab uses 2.14. It's taking us a lot of time.
@pulpbill How do you install or get the helm client in GitLab? Do you download it from releases, or use a Docker image? And you want to install I tried to find a Docker image for helm, but couldn't find any official ones. If you want, you could build a Docker image by downloading the helm binary and putting it in the image, and then use that image in your GitLab CI config. You can find the URL for downloading binaries (all versions) on the releases page: https://github.com/helm/helm/releases . And you can do the same (download and install in
Let's try to keep the topic on subject. @pulpbill, if you don't mind sending an email to the helm-users mailing list or asking the GitLab team directly, that'd be great; this seems like an issue with GitLab more so than with Helm, and it doesn't seem related to the issue discussed here.
AutoDevOps downloads helm when it runs the deploy job, if you take a look here: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml#L472
Thank you @karuppiah7890 and @mitchellmaler for the tips! I remember I raised an issue with GitLab for AutoDevOps. I will have to wait for the release of 2.14.1; I don't have the time right now to build a new pipeline :( @bacongobbler Sorry for the off-topic!
There is a breaking change in helm 2.14.0 [0]. This is expected to be fixed in helm 2.14.1, reverting until we can update to that. [0]: helm/helm#5750 This reverts commit 89d98fb. Change-Id: Ica6d51b5c67a26c356804fd69d466e88ad5c216b
Helm v2.14.1 has been released: https://github.com/helm/helm/releases/tag/v2.14.1
Hello team, we are facing this issue in the newly released v3 as well, but not in the beta version v3.0.0-beta.4. Kindly help with a resolution.
See helm/helm#5750 Signed-off-by: scottrigby <scott@r6by.com>
This is still happening on
Happening on 3.2.3 too, on Mac:
I don't have access to the old code, but I did have a real issue in my chart which resulted in an error on v3.3.0, and the error was gone when I fixed it.
Hello,
After upgrading from v2.13.1 to v2.14.0, my chart now throws an error on helm install:
```
Error: validation failed: error validating "": error validating data: unknown object type "nil" in Deployment.spec.template.metadata.annotations.buildID
```
This seems to be due to the use, in the deployment.yaml file, of a template variable "buildID" that is never actually declared in values.yaml.
Extract from deployment.yaml:
- If I set the buildID variable in the values.yaml file to "", I get the same error.
- If I set the buildID variable in the values.yaml file to any other string, such as "a", then my install works.
- If I set "" to buildID in deployment.yaml (buildID: {{ "" }}), I get the same error.
- If I set "" directly to buildID in deployment.yaml (buildID: ""), then my install works.
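The cases above are consistent with how the template renders: an unquoted, unset (or empty) value renders as YAML nil, while a quoted literal renders as an empty string. One template line that should behave consistently in all four cases is sketched below (illustrative only, not verified against this exact chart; note that `quote` alone may still skip a nil value, so `default ""` is applied first):

```yaml
# deployment.yaml (excerpt): illustrative sketch
metadata:
  annotations:
    # `default "" | quote` renders "" whether or not buildID is set in
    # values.yaml, so the annotation value is never nil:
    buildID: {{ .Values.buildID | default "" | quote }}
```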
Could you please let me know if this is a known issue, or if I am missing anything here?
Thanks!
Output of `helm version`:

```
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
```
Output of `kubectl version`:

```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-25T02:41:57Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform: AKS