
Error: no available release name found #3055

Closed · zavalit opened this issue Oct 23, 2017 · 27 comments

zavalit commented Oct 23, 2017

Hi folks,
I just don't have a clue what is going wrong.

The first time I try to run:

$ helm install stable/mongodb-replicaset
Error: no available release name found

i "disabled" RBAC

kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts 

but nothing has changed:

$ helm install stable/mongodb-replicaset
Error: no available release name found

kubernetes

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

helm

$ helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}

helm repos

$ helm search | grep mongo
stable/mongodb               	0.4.17 	NoSQL document-oriented database that stores JS...
stable/mongodb-replicaset    	2.1.2  	NoSQL document-oriented database that stores JS...

tiller pod

$ kubectl get pods --all-namespaces | grep tiller
kube-system   tiller-deploy-5cd755f8f-c8nnl               1/1       Running   0          22m

tiller log

[tiller] 2017/10/23 19:12:50 preparing install for
[storage] 2017/10/23 19:12:50 getting release "busted-shark.v1"
[storage/driver] 2017/10/23 19:13:20 get: failed to get "busted-shark.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/busted-shark.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:13:20 info: generated name busted-shark is taken. Searching again.
[storage] 2017/10/23 19:13:20 getting release "lucky-rabbit.v1"
[storage/driver] 2017/10/23 19:13:50 get: failed to get "lucky-rabbit.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/lucky-rabbit.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:13:50 info: generated name lucky-rabbit is taken. Searching again.
[storage] 2017/10/23 19:13:50 getting release "exiled-lynx.v1"
[storage/driver] 2017/10/23 19:14:20 get: failed to get "exiled-lynx.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/exiled-lynx.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:14:20 info: generated name exiled-lynx is taken. Searching again.
[storage] 2017/10/23 19:14:20 getting release "eloping-echidna.v1"
[storage/driver] 2017/10/23 19:14:50 get: failed to get "eloping-echidna.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/eloping-echidna.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:14:50 info: generated name eloping-echidna is taken. Searching again.
[storage] 2017/10/23 19:14:50 getting release "soft-salamander.v1"
[storage/driver] 2017/10/23 19:15:20 get: failed to get "soft-salamander.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/soft-salamander.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/23 19:15:20 info: generated name soft-salamander is taken. Searching again.
[tiller] 2017/10/23 19:15:20 warning: No available release names found after 5 tries
[tiller] 2017/10/23 19:15:20 failed install prepare step: no available release name found

bacongobbler commented Oct 23, 2017

Kubernetes 1.8 support was only recently added in helm v2.7.0 so I wouldn't expect Helm v2.6.2 to work with a 1.8 cluster. Can you try the v2.7.0-rc1 release and see if that works? Installing the v2.7.0-rc1 binary locally and running helm reset && helm init should do the trick. Thanks! :)
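For anyone following along, a minimal sketch of that manual upgrade, assuming the darwin-amd64 tarball from the kubernetes-helm download bucket (adjust the platform and paths for your setup):

$ curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.7.0-rc1-darwin-amd64.tar.gz
$ tar -zxvf helm-v2.7.0-rc1-darwin-amd64.tar.gz
$ sudo mv darwin-amd64/helm /usr/local/bin/helm
$ helm reset && helm init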


zavalit commented Oct 26, 2017

@bacongobbler thanks for the hint, but it didn't change that much

helm version
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}

and when I try it again:

$ helm install stable/mongodb-replicaset
Error: no available release name found

with the following log:

[tiller] 2017/10/26 18:11:22 preparing install for
[storage] 2017/10/26 18:11:22 getting release "listless-toucan.v1"
[storage/driver] 2017/10/26 18:11:36 get: failed to get "zealous-panther.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/zealous-panther.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:11:36 info: generated name zealous-panther is taken. Searching again.
[storage] 2017/10/26 18:11:36 getting release "terrifying-serval.v1"
[storage/driver] 2017/10/26 18:11:52 get: failed to get "listless-toucan.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/listless-toucan.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:11:52 info: generated name listless-toucan is taken. Searching again.
[storage] 2017/10/26 18:11:52 getting release "jittery-rat.v1"
[storage/driver] 2017/10/26 18:12:06 get: failed to get "terrifying-serval.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/terrifying-serval.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:12:06 info: generated name terrifying-serval is taken. Searching again.
[storage] 2017/10/26 18:12:06 getting release "wayfaring-dachshund.v1"
[storage/driver] 2017/10/26 18:12:22 get: failed to get "jittery-rat.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/jittery-rat.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:12:22 info: generated name jittery-rat is taken. Searching again.
[storage] 2017/10/26 18:12:22 getting release "lucky-arachnid.v1"
[storage/driver] 2017/10/26 18:12:36 get: failed to get "wayfaring-dachshund.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/wayfaring-dachshund.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:12:36 info: generated name wayfaring-dachshund is taken. Searching again.
[storage] 2017/10/26 18:12:36 getting release "gangly-lambkin.v1"
[storage/driver] 2017/10/26 18:12:52 get: failed to get "lucky-arachnid.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/lucky-arachnid.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:12:52 info: generated name lucky-arachnid is taken. Searching again.
[storage] 2017/10/26 18:12:52 getting release "boiling-kudu.v1"
[storage/driver] 2017/10/26 18:13:06 get: failed to get "gangly-lambkin.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/gangly-lambkin.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:13:06 info: generated name gangly-lambkin is taken. Searching again.
[storage] 2017/10/26 18:13:06 getting release "quoting-sloth.v1"
[storage/driver] 2017/10/26 18:13:22 get: failed to get "boiling-kudu.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/boiling-kudu.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:13:22 info: generated name boiling-kudu is taken. Searching again.
[storage] 2017/10/26 18:13:22 getting release "nordic-rabbit.v1"
[storage/driver] 2017/10/26 18:13:36 get: failed to get "quoting-sloth.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/quoting-sloth.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:13:36 info: generated name quoting-sloth is taken. Searching again.
[tiller] 2017/10/26 18:13:36 warning: No available release names found after 5 tries
[tiller] 2017/10/26 18:13:36 failed install prepare step: no available release name found
[storage/driver] 2017/10/26 18:13:52 get: failed to get "nordic-rabbit.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/nordic-rabbit.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2017/10/26 18:13:52 info: generated name nordic-rabbit is taken. Searching again.
[tiller] 2017/10/26 18:13:52 warning: No available release names found after 5 tries
[tiller] 2017/10/26 18:13:52 failed install prepare step: no available release name found


zavalit commented Oct 26, 2017

OK...
I replaced flannel with calico and it got running.
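The i/o timeouts against 10.96.0.1:443 in the Tiller log above point at pod-to-apiserver networking, i.e. the CNI layer, which is why swapping the network plugin helped. A quick, hedged way to confirm that from inside the cluster (the curlimages/curl image is just one convenient choice):

$ kubectl run net-test --rm -it --restart=Never --image=curlimages/curl -- curl -k -m 5 https://10.96.0.1:443/version

If the overlay network is broken, this request should time out the same way Tiller's ConfigMap lookups do.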

zavalit closed this as completed Oct 26, 2017

vhosakot commented Jan 9, 2018

Per #2224 (comment), the following commands resolved the error for me:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
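To confirm the patch landed, something like this should print tiller (the deprecated serviceAccount field gets synced to serviceAccountName):

$ kubectl get deploy tiller-deploy -n kube-system -o jsonpath='{.spec.template.spec.serviceAccountName}'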

@peakyblinder

After many approaches, this finally worked for me. Thanks!

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

@donbecker

The above 3 lines resolved this for me as well.
kubectl client: 1.9.6
kubectl server: 1.8.7
helm client: 2.8.2
helm server: 2.8.2


viane commented Apr 30, 2018

The issue appears, and the solutions mentioned are not working for:

Kube Client Version: 1.10.1
Kube Server Version: 1.10.1
Helm Client: "v2.9.0"
Helm Server: "v2.9.0"

Also, executing helm list with minikube on, I got this error:
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused

@bacongobbler

@viane try helm init --service-account default; it's another ticket but it results in the same generic error.
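Since Tiller is already deployed at that point, the flag likely has to be applied as an upgrade; a hedged variant of the same idea (which bfin confirms below):

$ helm init --upgrade --service-account default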


AtebMT commented Apr 30, 2018

@viane Try the following steps. (You'll probably need to kubectl delete the tiller service and deployment.)

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

That fixed it for me.
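Spelling out the delete step mentioned above, assuming the default kube-system install:

$ kubectl delete deployment tiller-deploy --namespace kube-system
$ kubectl delete service tiller-deploy --namespace kube-system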


bfin commented Apr 30, 2018

helm reset && helm init didn't work for me, nor did the RBAC solutions above.
Finally got it working again by deleting Tiller and then using the suggestion in #3055 (comment):

kubectl delete deployment tiller-deploy --namespace kube-system
helm init --upgrade --service-account default

@nguyenhuuloc304

I encountered the same issue, then I tried the following:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

With
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
I got the message "Error from server (BadRequest): invalid character 's' looking for beginning of object key string".
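That BadRequest error is most likely a shell quoting issue: the kubectl version output below shows a windows/amd64 client, and cmd.exe does not treat single quotes as quoting characters, so the JSON reaches the server mangled. A hedged cmd.exe-friendly variant with escaped double quotes:

kubectl patch deploy --namespace kube-system tiller-deploy -p "{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}"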

and then I tried the following commands:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

I got the message:
failed: clusterroles.rbac.authorization.k8s.io .... [clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Please help me!...
Below is my information:
helm version

Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

minikube version
minikube version: v0.25.0

The strange thing is that I used Helm to install stable/nginx-ingress successfully on May 9, then deleted Kubernetes (for practice), then re-installed Kubernetes today and installed stable/nginx-ingress again .... oops, got the above error.

Thank you so much for your support in advance

@deadishlabs

@nguyenhuuloc304 I ran into the same issue. I had to create the cluster-admin ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: null
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
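Assuming the manifest is saved as cluster-admin.yaml, it can be applied and verified with:

$ kubectl apply -f cluster-admin.yaml
$ kubectl get clusterrole cluster-admin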


fox1t commented Jun 2, 2018

I think it is really important to add this somewhere in the guide. AKS on Azure doesn't provide a default cluster-admin role, and a user has to create it.
jenkins-x/jx#485 (comment)
This was also the case on ACS, as we can see here: Azure/acs-engine#1892 (comment)

@CharlieKuharski

This worked for me as I tried to helm install redis:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm repo update # This was the last piece of the puzzle
helm install stable/redis --version 3.3.5

@pulpbill

Same here:
kube client: v1.10.4
kube server: v1.9.6
helm client/server: v2.9.1

# helm install stable/prometheus --namespace=monitoring --set rbac.create="true"
Error: no available release name found

# helm search | grep prometheus
coreos/grafana                          0.0.35                                          Grafana instance for kube-prometheus
coreos/kube-prometheus                  0.0.82                                          Manifests, dashboards, and alerting rules for e...
coreos/prometheus                       0.0.43                                          Prometheus instance created by the CoreOS Prome...
coreos/prometheus-operator              0.0.26          0.20.0                          Provides easy monitoring definitions for Kubern...
stable/prometheus                       6.7.2           2.2.1                           Prometheus is a monitoring system and time seri...

Just ran this line and it worked, thanks for posting it!: kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

#kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io "tiller-cluster-rule" created
[root@ip-172-31-90-223 charts]# helm install stable/prometheus --namespace=monitoring --set rbac.create="true"
NAME:   ungaged-sloth
LAST DEPLOYED: Thu Jun 14 23:52:31 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

@zwhitchcox

Why does it take so long for Error: no available release name found to show up? It honestly takes 5 minutes for me to get the error message, so the 40,000 things I have to try to get it to work take 5m × 40,000.


filipre commented Aug 29, 2018

For me, not a single solution worked. However, I reinstalled minikube as well as tiller and I did this step first:

If your cluster has Role-Based Access Control (RBAC) enabled, you may want to configure a service account and rules before proceeding.

This is indeed mentioned in the documentation but it is a bit confusing since it appears after this paragraph:

If you’re using Helm on a cluster that you completely control, like minikube or a cluster on a private network in which sharing is not a concern, the default installation – which applies no security configuration – is fine, and it’s definitely the easiest. To install Helm without additional security steps, install Helm and then initialize Helm.


oesgul commented Oct 22, 2018

The instructions below solved my problem as well, on helm v2.11.0 and kube 1.12.1.

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller


rangapv commented Nov 17, 2018

sudo iptables -P FORWARD ACCEPT

The above command is all I had to do to get rid of the error; none of the other solutions seemed to work for me.

Regards
Ranga
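For context: Docker 1.13+ sets the iptables FORWARD chain policy to DROP, which can silently break pod-to-pod traffic on some setups; that is likely why this helps. A hedged way to check before changing anything:

$ sudo iptables -L FORWARD -n | head -1   # look for "(policy DROP)"
$ sudo iptables -P FORWARD ACCEPT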


PlugIN73 commented Dec 4, 2018

The same fix, but with Terraform:

  resource "kubernetes_service_account" "tiller" {
    metadata {
      name = "tiller"
      namespace = "kube-system"
    }
  }

  resource "kubernetes_cluster_role_binding" "tiller-cluster-rule" {

    metadata {
      name = "tiller-cluster-rule"
    }

    role_ref {
      kind = "ClusterRole"
      name = "cluster-admin"
      api_group = "rbac.authorization.k8s.io"
    }

    subject {
      kind = "ServiceAccount"
      namespace = "kube-system"
      name = "tiller"
      api_group = ""
    }

    provisioner "local-exec" {
      command = "helm init --service-account tiller"
    }
  }


rangapv commented Dec 4, 2018

Did you try this?
sudo iptables -P FORWARD ACCEPT
Regards
Ranga


thomaslees commented Jan 8, 2019

I tried all the other options above in vain; the one suggested by rangapv worked for me. Thank you.


cgkades commented Feb 2, 2019

Nothing above worked.


rks1212 commented Feb 2, 2019

None of the above-mentioned solutions are working.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T07:10:00Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}

$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

$ kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
Ravis-MacBook-Pro-2:.kube ravi$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists
Ravis-MacBook-Pro-2:.kube ravi$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /Users/ravi/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
Ravis-MacBook-Pro-2:.kube ravi$ helm update repo
Command "update" is deprecated, use 'helm repo update'

Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Ravis-MacBook-Pro-2:.kube ravi$ helm install stable/redis
Error: no available release name found
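When the account and binding already exist but installs still fail, a hedged next step is to check which service account the running Tiller pod actually uses, and what its log says (the labels below assume the stock Helm v2 deployment):

$ kubectl get pod -n kube-system -l app=helm,name=tiller -o jsonpath='{.items[0].spec.serviceAccountName}'
$ kubectl logs -n kube-system -l app=helm,name=tiller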


ThoTischner commented Mar 25, 2019

Hey,

a more secure solution, without cluster-admin permissions:

  1. Create the following role in the ${TILLER_NAMESPACE}:
TILLER_NAMESPACE='your tiller namespace'
cat <<EOF | kubectl create -n ${TILLER_NAMESPACE} -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - get
  - list
  - update
  - delete
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
EOF
  2. Create the service account, bind the local role, and patch the deploy:
kubectl create serviceaccount --namespace ${TILLER_NAMESPACE} tiller
kubectl create rolebinding tiller-rule --role=tiller --serviceaccount=${TILLER_NAMESPACE}:tiller
kubectl patch deploy --namespace ${TILLER_NAMESPACE} tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

This should fix the above error.

If you want Tiller to deploy charts into a project namespace, you need to give the tiller service account edit permissions there:

kubectl create rolebinding tiller-edit-rights -n ${YOUR_PROJECT_NAMESPACE} --clusterrole=edit --serviceaccount=${TILLER_NAMESPACE}:tiller
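With a namespaced Tiller like this, installs likely need the matching flag as well; a sketch, assuming Helm v2's --tiller-namespace flag and a hypothetical project namespace myapp:

$ helm install stable/redis --tiller-namespace ${TILLER_NAMESPACE} --namespace myapp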

@Ch3atToW1n

None of the above solutions worked for me, but the instructions at the following link did.

https://scriptcrunch.com/helm-error-no-available-release/

@ubaid-qureshi

None of the above solutions worked for me, but the instructions at the following link did.

https://scriptcrunch.com/helm-error-no-available-release/

Thanks mate, it works
