
Running Kubernetes Locally via Docker - kubectl get nodes returns The connection to the server localhost:8080 was refused - did you specify the right host or port? #23726

Closed
xificurC opened this issue Apr 1, 2016 · 62 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@xificurC

xificurC commented Apr 1, 2016

Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.

Steps taken:

  • export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
  • copy-paste the docker run command
  • download the appropriate kubectl binary and put it on PATH (which kubectl works)
  • (optionally) set up the cluster
  • run kubectl get nodes

In short, no magic. I am running this locally on Ubuntu 14.04, Docker 1.10.3. If you need more information, let me know.

@jankoprowski

Have the same issue with version 1.2.0 +1

@bgrant0607
Member

@xificurC @jankoprowski Have you checked whether the apiserver is running?

Please take a look at our troubleshooting guide:
http://kubernetes.io/docs/troubleshooting/

If you still need help, please ask on stackoverflow.
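For the Docker-based local setup from the guide, a rough way to check (assuming the apiserver container publishes the default insecure port 8080) is:

docker ps | grep apiserver
curl http://localhost:8080/healthz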

@jankoprowski

apiserver failed with:

F0421 14:28:55.140493 1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory
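A workaround some people use for this particular failure (a sketch, untested here, assuming /srv/kubernetes is visible to the apiserver container and that the apiserver only needs the file to exist; each line of basic_auth.csv is password,user,uid) is to create the file before starting the container:

sudo mkdir -p /srv/kubernetes
echo 'admin,admin,admin' | sudo tee /srv/kubernetes/basic_auth.csv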

@ant-caichu

I also met this problem, and my apiserver has not failed; all the processes (apiserver, controller-manager, scheduler, kubelet and kube-proxy) are running normally. My Docker version is 1.11.2. Does anyone know how to resolve this problem?

@ShengjieLuo

I have met this problem too. Since I need to use Kubernetes 1.2.2, I use Docker to deploy Kubernetes. The same problem happens: the apiserver is down. Logs here:

I0725 08:56:20.440089       1 genericapiserver.go:82] Adding storage destination for group batch
W0725 08:56:20.440127       1 server.go:383] No RSA key provided, service account token authentication disabled
F0725 08:56:20.440148       1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory

The apiserver has failed and I cannot deploy Kubernetes. Does anyone know about it?

@SylarChen

Try using --server to specify your master:
kubectl --server=16.187.189.90:8080 get pod -o wide

@rahmanusta

Hello, I'm getting the following error on CentOS 7; how can I solve this issue?

[root@ip-172-31-11-12 system]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

@kvarnhammar

You can solve this with "kubectl config":

$ kubectl config set-cluster demo-cluster --server=http://master.example.com:8080
$ kubectl config set-context demo-system --cluster=demo-cluster
$ kubectl config use-context demo-system
$ kubectl get nodes
NAME                 STATUS    AGE
master.example.com   Ready     3h
node1.example.com    Ready     2h
node2.example.com    Ready     2h

@alexanderilyin

In my case I had just to remove ~/.kube/config which left from previous attempt.
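That is:

rm ~/.kube/config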

@FrankYu

FrankYu commented Mar 7, 2017

Hi,
I still met this problem with
kubernetes-master-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-node-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-unit-test-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-ansible-0.6.0-0.1.gitd65ebd5.el7.noarch
kubernetes-client-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-1.4.0-0.1.git87d9d8d.el7.x86_64

If I configure KUBE_API_ADDRESS with the value below
KUBE_API_ADDRESS="--insecure-bind-address=10.10.10.xx"
I hit this error, and it only works if I pass "--server=10.10.10.xx:8080" on the command line.

If I configure KUBE_API_ADDRESS with the value below
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
it works fine.
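For reference, with the RPM-based packages above, KUBE_API_ADDRESS normally lives in /etc/kubernetes/apiserver (the file mentioned further down in this thread) and is picked up after a restart; a sketch, assuming that layout:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
sudo systemctl restart kube-apiserver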

@quasardriod

quasardriod commented Apr 5, 2017

I was trying to get the status from a remote system using Ansible and was facing the same issue.
The following worked for me:
kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide

@mamirkhani

mamirkhani commented Apr 9, 2017

Similar to @sumitkau, I solved my problem by pointing kubectl at a different kubeconfig location:
kubectl --kubeconfig /etc/kubernetes/admin.conf get no
You can also copy /etc/kubernetes/admin.conf to ~/.kube/config and it works, but I don't know whether that is good practice or not!

@pgopina1

Update the entry in /etc/kubernetes/apiserver (on the master server):
KUBE_API_PORT="--port=8080"
then do a systemctl restart kube-apiserver.

@nestoru

nestoru commented Apr 29, 2017

If this happens in GCP, the command below will most likely resolve the issue:

gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project

@yueawang

yueawang commented May 5, 2017

Thanks to @mamirkhani. I solved this error.
However, I just found this info in the "kubeadm init" output:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I think this is the recommended solution.

@crashev

crashev commented Jun 12, 2017

I had the same problem. After creating a cluster via the web GUI in Google Cloud and trying to run kubectl, I got

The connection to the server localhost:8080 was refused - did you specify the right host or port?

All you have to do is fetch the kubectl config for your cluster, which will be stored in $HOME/.kube/config:

$ gcloud container clusters get-credentials guestbook2
Fetching cluster endpoint and auth data.
kubeconfig entry generated for guestbook2.

Now kubectl works just fine

@Komorebi-E

kubectl is expecting ~/.kube/config as the filename for its configuration.

The quick fix that worked for me was to create a symbolic link:

ln -s ~/.kube/config.conjure-canonical-kubern-e82 ~/.kube/config

N.B. This was for a "conjure-up kubernetes" deployment.

@pengyue

pengyue commented Aug 18, 2017

This issue had confused me for a week; it seems to be working for me now. If you have this issue, first of all you need to know which node it happens on.

If it is a master node, then make sure all of the Kubernetes pods are running with the command
kubectl get pods --all-namespaces

mine looks like this
kube-system   etcd-kubernetes-master01                      1/1   Running   2   6d
kube-system   kube-apiserver-kubernetes-master01            1/1   Running   3   6d
kube-system   kube-controller-manager-kubernetes-master01   1/1   Running   2   6d
kube-system   kube-dns-2425271678-3kkl1                     3/3   Running   6   6d
kube-system   kube-flannel-ds-brw34                         2/2   Running   6   6d
kube-system   kube-flannel-ds-psxc8                         2/2   Running   7   6d
kube-system   kube-proxy-45n1h                              1/1   Running   2   6d
kube-system   kube-proxy-fsn6f                              1/1   Running   2   6d
kube-system   kube-scheduler-kubernetes-master01            1/1   Running   2   6d

If it does not, then verify that you have these files in your /etc/kubernetes/ directory:
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
If you do, then copy the admin config as a normal user (not the ROOT user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then see if kubectl version works or not. If it still does not work, follow the tutorial at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ to tear down your cluster and rebuild your master.

If it happens on (slave) nodes, then make sure you have the files
kubelet.conf manifests pki
in your /etc/kubernetes/ directory, and that in kubelet.conf the server field points to your master IP, the same setting as in your master node's admin.conf.
If you don't have kubelet.conf, that is probably because you haven't run the command to join your nodes with your master:
kubeadm join --token f34tverg45ytt34tt 192.168.1.170:6443
You should get this command (with the token) after your master node is built.

After logging in as a normal user on the (slave) node, you probably won't see a config file in your ~/.kube. Create this folder, copy admin.conf from your master node into the ~/.kube/ directory on this (slave) node as config (as a normal user), and then try kubectl version. It works for me.

@mnarusze

mnarusze commented Oct 4, 2017

While I know that there might be multiple reasons for failure here, in my case removing ~/.kube/cache helped immediately.
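That is:

rm -rf ~/.kube/cache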

@th0j

th0j commented Oct 24, 2017

I have this issue. This solution worked for me:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

If you don't have admin.conf, please install kubeadm.
Then remove ~/.kube/cache:

rm -rf ~/.kube/cache

@IshwarChandra

IshwarChandra commented Jan 8, 2018

You need to switch context.
kubectl config use-context docker-for-desktop

@karthiekchowdary1

Hi Team,

We need to install SAP Vora, for which Kubernetes and Docker are prerequisites. We have installed the Kubernetes master, kubectl, and Docker, but when we check

kubectl cluster-info

#kubectl cluster-info dump
2018-05-09 06:47:57.905806 I | proto: duplicate proto type registered: google.protobuf.Any
2018-05-09 06:47:57.905997 I | proto: duplicate proto type registered: google.protobuf.Duration
2018-05-09 06:47:57.906019 I | proto: duplicate proto type registered: google.protobuf.Timestamp
The connection to the server 10.x.x.x:6443 was refused - did you specify the right host or port?

when we checked systemctl status kubelet -l

kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled)
Active: failed (Result: start-limit) since Wed 2018-05-09 04:17:21 EDT; 2h 28min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2513 ExecStart=/usr/bin/hyperkube kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_INITIAL_ARGS $KUBELET_ARGS (code=exited, status=203/EXEC)
Main PID: 2513 (code=exited, status=203/EXEC)

We have performed the settings below:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

but to no avail. Can anyone help?

regards
karthik

@StrangeDreamer

Delete the minikube VM and its config files, then reinstall minikube (v0.25.2); other versions may have pitfalls.

$ minikube delete
$ rm -rf ~/.minikube
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

@prashantmaheshwari

Use the commands below. They worked for me.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

@Satheesh-Balachandran

Use the commands below. They worked for me.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Thanks! this worked!

@mittaus

mittaus commented Jan 14, 2019

In my case, I had rebooted the Kubernetes master node, and on restart the swap partition was re-enabled by default.

  1. sudo systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 90-local-extras.conf
   Active: activating (auto-restart) (Result: exit-code) since 금 2018-04-20 15:27:00 KST; 6s ago
     Docs: http://kubernetes.io/docs/
  Process: 17247 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 17247 (code=exited, status=255)
  2. sudo swapon -s
Filename	type 		size	Used	priority
/dev/sda6	partition	950267	3580	-1
  3. sudo swapoff /dev/sda6

  4. sudo systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-01-14 08:28:56 -05; 15min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 7018 (kubelet)
    Tasks: 25 (limit: 3319)
   CGroup: /system.slice/kubelet.service
           └─7018 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes
  5. kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   47h   v1.13.2
k8snode1    Ready    <none>   45h   v1.13.2
k8snode2    Ready    <none>   45h   v1.13.2
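Note that swapoff only lasts until the next reboot; to keep swap disabled permanently, a common approach (assuming a standard /etc/fstab with a swap entry) is to comment that entry out:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab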

@tennessine

I hadn't run this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

and that caused the problem.

@jimifm

jimifm commented May 15, 2019

ip route add default via xxx.xxx.xxx.xxx on k8s master

@AlvaWymer

$ kubectl apply -f Deployment.yaml
unable to recognize "Deployment.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "Deployment.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused

@FangyuanZhang34

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.

[screenshots: Docker preferences on macOS with the built-in Kubernetes enabled]

It works, and it's quite simple. If you are using desktop software, it's better to check the preference settings for a solution first. Haha.

@facejiong

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.

[screenshots: Docker preferences on macOS with the built-in Kubernetes enabled]

Thanks

@dianaabv

Well, it may sound stupid, but maybe you didn't install minikube to run your cluster locally.

@Fredmomo

Try reinstalling minikube if you have one, or try using kubectl proxy --port=8080.

@kinowarrior

OK, on Docker for Mac (v2.0.5.0) there are TWO settings that both need to be toggled.

[screenshot: Docker for Mac settings showing both Kubernetes options enabled]

@mydockergit

Make sure all the containers are removed:

docker rm -f $(docker ps -aq)

After you make sure all the containers have been removed, restart kubelet

systemctl restart kubelet

@mayuchau

[mayuchau@cg-id .kube]$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I am getting the above error. I tried the solutions mentioned above but they didn't work for me.

@mayuchau

[mayuchau@cg-id .kube]$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I am getting the above error. I tried the solutions mentioned above but they didn't work for me.

Issue resolved after verifying the permissions of /var/run/docker.sock on the master node.
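For anyone checking the same thing, a quick way to inspect the socket and give a non-root user access (assuming the standard docker group exists; log out and back in afterwards) is:

ls -l /var/run/docker.sock
sudo usermod -aG docker $USER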

@gmjjatin

Here is how I resolved it:

  1. Make sure kubectl is installed. Check it using:
    gcloud components list
    If not, install kubectl first.

  2. Go to your project's Kubernetes engine console on gcloud platform.

  3. There, connect to the cluster in which your project resides. It will give you a command to run in your local command prompt/terminal. For example, it will look like:

gcloud container clusters get-credentials <Cluster_Name> --zone <Zone> --project <Project_Id>

After a successful run of this command you would be able to run:
kubectl get nodes
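If kubectl does not show up in that components list, it can also be installed through the SDK itself:

gcloud components install kubectl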

@GT16

GT16 commented Mar 24, 2020

Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.

Steps taken:

  • export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
  • copy-paste the docker run command
  • download the appropriate kubectl binary and put it on PATH (which kubectl works)
  • (optionally) set up the cluster
  • run kubectl get nodes

In short, no magic. I am running this locally on Ubuntu 14.04, Docker 1.10.3. If you need more information, let me know.

Thanks!!!
This reminded me that I didn't have an export in my ~/.bashrc for the KUBECONFIG environment variable.
Adding that fixed my issue!

E.g.:
### ADD in ~/.bashrc
export KUBECONFIG=$HOME/.kube/eksctl/clusters/serv-eks-dev

@raftAtGit

One possible cause of this problem is that the current context in the kube config was deleted by some tool and no current context remains.

check with:

kubectl config get-contexts

and if there is no current context, make one current with:

kubectl config use-context <context name>

@dst-91

dst-91 commented Jul 14, 2020

I faced a similar issue, which was resolved with
export KUBECONFIG=/etc/kubernetes/admin.conf

@paulmwatson

If it helps anyone (I came here via Google search on the error) my Docker Desktop for Mac had Kubernetes disabled by default. Ticking Enabled Kubernetes and Apply & Restart sorted out the error.

@navkmurthy

navkmurthy commented Aug 11, 2020

On macOS: I am running Kubernetes locally via Docker, specifically https://k3d.io/. Post-installation, once the cluster is created, if I execute the command kubectl cluster-info it returns
The connection to the server 0.0.0.0:51939 was refused - did you specify the right host or port? Does anyone have any pointers to this issue?

PS: Docker and docker-machine were installed via Homebrew.

@paulmwatson

What does kubectl config get-contexts return @navkmurthy?

@navkmurthy

navkmurthy$ k3d cluster create -p 5432:30080@agent[0] -p 9082:30081@agent[0] --agents 3 --update-default-kubeconfig
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating node 'k3d-k3s-default-agent-0'
INFO[0002] Creating node 'k3d-k3s-default-agent-1'
INFO[0003] Creating node 'k3d-k3s-default-agent-2'
INFO[0004] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0024] Cluster 'k3s-default' created successfully!
INFO[0024] You can now use it like this:
kubectl cluster-info
navkmurthy$ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 0.0.0.0:53706 was refused - did you specify the right host or port?

@navkmurthy

navkmurthy commented Aug 11, 2020

@paulmwatson

navkmurthy$ kubectl config get-contexts
CURRENT   NAME              CLUSTER           AUTHINFO                NAMESPACE
*         k3d-k3s-default   k3d-k3s-default   admin@k3d-k3s-default

navkmurthy$

@sgc109

sgc109 commented Jan 26, 2021

In my case, I hadn't run minikube start after the minikube cluster had somehow been deleted automatically.
Check this if you still see the same error message even after you have enabled Kubernetes in the Docker Desktop preferences.
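That is, check the cluster state and bring it back up with:

minikube status
minikube start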

@yuukiii

yuukiii commented Feb 4, 2021

I did a minikube status which indicated that the kubectl had a stale pointer,
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run minikube update-context

I then ran minikube update-context and then minikube start --driver=docker
After that kubectl get pods worked:

NAME READY STATUS RESTARTS AGE
kubernetes-bootcamp-57978f5f5d-96b97 1/1 Running 1 47h

@scientiacoder

Thanks to @mamirkhani. I solved this error.
However, I just found this info in the "kubeadm init" output:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I think this is the recommended solution.

This answer works for me because the machine needs to know where the master (admin) is, not localhost.

@therealaditigupta

My issue happened on RHEL and it turned out that my Docker daemon was inactive.

How to fix this issue:
Check Docker status: sudo service docker status
Restart the Docker engine: sudo service docker restart
Check the status again -- it should be up and running now

@ghost

ghost commented May 28, 2021

Had the same problem; my setup has 3 nodes (1 control and 2 workers).
When I issued kubectl get nodes on the workers I got:

[asd1@kubevm-worker1 ~]$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[asd1@kubevm-worker2 ~]$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solved it by:

  • on the control node, I ran "sudo cat /etc/kubernetes/admin.conf"

  • moved to the worker nodes and, as a normal user (NOT root):
    cd ~/.kube/
    vi config (pasted the output from sudo cat /etc/kubernetes/admin.conf) and saved the file
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • rebooted the 3 VMs

After this:

[asd1@kubevm-worker2 ~]$ kubectl get nodes
NAME             STATUS   ROLES                  AGE   VERSION
kubevm-control   Ready    control-plane,master   89m   v1.21.1
kubevm-worker1   Ready    <none>                 59m   v1.21.1
kubevm-worker2   Ready    <none>                 58m   v1.21.1
[asd1@kubevm-worker2 ~]$

[asd1@kubevm-worker1 .kube]$ kubectl get nodes
NAME             STATUS   ROLES                  AGE    VERSION
kubevm-control   Ready    control-plane,master   101m   v1.21.1
kubevm-worker1   Ready    <none>                 71m    v1.21.1
kubevm-worker2   Ready    <none>                 71m    v1.21.1
[asd1@kubevm-worker1 .kube]$
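An equivalent way to get the kubeconfig onto each worker without pasting it into vi (a sketch, assuming SSH access from the workers to the control node and passwordless sudo there; user and hostname are taken from the prompts above) is:

mkdir -p $HOME/.kube
ssh asd1@kubevm-control 'sudo cat /etc/kubernetes/admin.conf' > $HOME/.kube/config
chmod 600 $HOME/.kube/config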

@codingroses

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.
[screenshots: Docker preferences on macOS with the built-in Kubernetes enabled]

It works, and it's quite simple. If you are using desktop software, it's better to check the preference settings for a solution first. Haha.

Nope, still doesn't work. And yes, this was the first thing I also tried.
