
The connection to the server localhost:8080 was refused - did you specify the right host or port? #50295

Closed
AliYmn opened this issue Aug 8, 2017 · 53 comments
Labels
sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.

Comments

@AliYmn commented Aug 8, 2017

Hi,

>> kubectl get pods --all-namespaces | grep dashboard
Result:
The connection to the server localhost:8080 was refused - did you specify the right host or port?

>> kubectl create -f https://git.io/kube-dashboard
Result:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
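The error itself usually means kubectl could not find a kubeconfig and fell back to its built-in default of http://localhost:8080. A minimal sketch for checking what kubectl is actually pointing at, assuming only that kubectl is installed:

```sh
# kubectl uses $KUBECONFIG if set, otherwise ~/.kube/config;
# with neither present it defaults to localhost:8080
kubectl config view        # empty clusters: [] / users: [] means no config was found
echo $KUBECONFIG           # explicit override, if any
ls -l $HOME/.kube/config   # the default location kubectl checks
```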
@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Aug 8, 2017
@xiangpengzhao (Contributor)

Can you check if your kube-apiserver is running and insecure-port 8080 is enabled?
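A minimal sketch of that check, assuming a Linux control-plane node set up with kubeadm (paths and flags may differ on other setups):

```sh
# is the apiserver process up at all?
ps aux | grep [k]ube-apiserver
# is anything listening on the insecure port 8080?
sudo ss -tlnp | grep 8080
# on kubeadm clusters the flag lives in the static pod manifest
grep insecure-port /etc/kubernetes/manifests/kube-apiserver.yaml
```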

@AliYmn (Author) commented Aug 8, 2017

@xiangpengzhao No, it's not running.

@xiangpengzhao (Contributor)

It should be running. How did you set up your cluster?

@AliYmn (Author) commented Aug 8, 2017

root@ubuntu-512mb-nyc3-01:~$ lsof -i
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd       1527 root    3u  IPv4  15779      0t0  TCP *:ssh (LISTEN)
sshd       1527 root    4u  IPv6  15788      0t0  TCP *:ssh (LISTEN)
VBoxHeadl 15644 root   22u  IPv4  37266      0t0  TCP localhost:2222 (LISTEN)
sshd      18809 root    3u  IPv4  42637      0t0  TCP 104.131.172.65:ssh->78.187.60.13.dynamic.ttnet.com.tr:63690 (ESTABLISHED)
redis-ser 25193 root    4u  IPv6  56627      0t0  TCP *:6380 (LISTEN)
redis-ser 25193 root    5u  IPv4  56628      0t0  TCP *:6380 (LISTEN)
kubectl   31904 root    3u  IPv4  89722      0t0  TCP localhost:8001 (LISTEN)

@xiangpengzhao (Contributor)

FYI: https://kubernetes.io/docs/setup/pick-right-solution/

@xiangpengzhao (Contributor)

/sig cluster-lifecycle

@k8s-ci-robot k8s-ci-robot added the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label Aug 9, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Aug 9, 2017
@joshualevy2

I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the `kubeadm init` command, and you need to copy it to all your worker (minion) nodes yourself; kubeadm does not do this for you.
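A rough sketch of that copy, with hypothetical names (ubuntu, node-1) standing in for your own user and hosts; note the command is `kubeadm init`, not "kubeadmin init":

```sh
# on the master, after `sudo kubeadm init` has written /etc/kubernetes/admin.conf
sudo scp /etc/kubernetes/admin.conf ubuntu@node-1:/home/ubuntu/admin.conf
# on the machine where you run kubectl
export KUBECONFIG=$HOME/admin.conf
kubectl get nodes
```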

@gkatsanos commented Nov 28, 2017

```
~/D/p/i/server (master|✔) $ kubectl create -f wtf.yml
W1128 16:34:09.944864   27487 factory_object_mapping.go:423] Failed to download OpenAPI (Get http://localhost:8080/swagger-2.0.0.pb-v1: dial tcp [::1]:8080: getsockopt: connection refused), falling back to swagger
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

~/D/p/i/server (master|✔) $ cat wtf.yml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myserver
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: myserver
    image: gkatsanos/server
    env:
    - name: JWT_EXPIRATION_MINUTES
      value: "1140"
    - name: JWT_SECRET
      value: "XXX"
    - name: MONGO_URI
      value: "mongodb://mongodb:27017/isawyou"
    - name: CLIENT_URI
      value: "//localhost:8080/"
    - name: MONGO_URI_TESTS
      value: "mongodb://mongodb:27017/isawyou-test"
    - name: PORT
      value: "3000"
```

```
~/D/p/i/server (master|✔) $ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

@didd commented Feb 1, 2018

In my case this was happening due to a failing kubelet service (check with `service kubelet status`), and I had to run `swapoff -a` to disable paging and swapping, which fixed the problem. You can read about the "why" here.
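A sketch of that fix, with the /etc/fstab edit that keeps swap off across reboots added as a common companion step (inspect your fstab before running the sed line, since entry formats vary):

```sh
sudo systemctl status kubelet              # confirm kubelet is the failing piece
sudo swapoff -a                            # disable swap for the current boot
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so the change persists
sudo systemctl restart kubelet
```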

@MulticsYin

Maybe you have not set the environment variable; try this:
export KUBERNETES_MASTER=http://MasterIP:8080
where MasterIP is your Kubernetes master IP.

@clenk commented Mar 2, 2018

I had this problem because I was running kubectl as the wrong user. I had copied /etc/kubernetes/admin.conf to .kube/config in one user's home directory and needed to run kubectl as that user.

@moqichenle commented Mar 27, 2018

Running these commands solved the issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.
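A quick sanity check after those three commands, to confirm kubectl now reaches the real apiserver instead of localhost:8080:

```sh
kubectl cluster-info   # should print the control-plane URL
kubectl get nodes      # should list the cluster's nodes
```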

@fengerzh

I don't understand: why must these commands be run as a normal user and not as root?
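The copy exists mainly so a non-root user owns a readable kubeconfig; the kubeadm output itself offers a root alternative, pointing KUBECONFIG straight at the file:

```sh
# as root, skip the copy entirely
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```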


@prabhakarsultane commented Oct 11, 2018

There is a configuration issue: if you set up Kubernetes as root and try to execute kubectl commands as a different user, this error will occur.
To resolve it, simply run the commands below:

root@devops:~# cp -r .kube/ /home/ubuntu/
root@devops:~# chown -R ubuntu:ubuntu /home/ubuntu/.kube
root@devops:~# su ubuntu
ubuntu@devops:~$ kubectl get pod -o wide

NAME  READY  STATUS   RESTARTS  AGE  IP           NODE    NOMINATED NODE
cron  1/1    Running  0         2h   10.244.0.97  devops

@helloworlde

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

I tried this solution on Ubuntu 18.04, but it still did not work. In the end I found it was caused by swap, so I fixed it by disabling swap like this:

sudo swapoff -a
sudo chown $(id -u):$(id -g) $HOME/.kube/config

@neolit123 (Member)

please try tools like kops or kubeadm that will handle all the setup for you.
they also print instructions in the terminal on how to set up admin.conf or pod network plugins.

closing this issue.
for similar questions try stackoverflow:
https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example

/close

@k8s-ci-robot (Contributor)

@neolit123: Closing this issue.

@Oyunbold commented Nov 2, 2018

kubectl config set-cluster demo-cluster --server=http://localhost:8001
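That only helps if something is actually serving on port 8001, e.g. a `kubectl proxy` started elsewhere with working credentials (as in the lsof output earlier in this thread); a sketch under that assumption:

```sh
kubectl proxy --port=8001 &   # needs a working kubeconfig itself
kubectl config set-cluster demo-cluster --server=http://localhost:8001
kubectl config set-context demo --cluster=demo-cluster
kubectl config use-context demo
```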

@jvleminc commented Jan 10, 2019

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

I fixed it through similar commands:
kubernetes-sigs/kubespray#1615 (comment)

@HiMyFriend

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Mission complete!

@azoaib commented Apr 16, 2019

I am using Docker for Mac and got the same issue; restarting the Docker daemon solved it.

@soromamadou

Hello,
Be sure not to run your command as root; you need to use a regular user account.

@avaslev commented Apr 19, 2019

If, after running `sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf`, the command `kubectl config view` displays this:

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

then running `unset KUBECONFIG` solved it.
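The empty output means kubectl was resolving a KUBECONFIG override with no usable config behind it; a short way to see the resolution order:

```sh
echo $KUBECONFIG      # explicit override; empty means ~/.kube/config is used
unset KUBECONFIG      # drop the override and fall back to the default file
kubectl config view   # should now show your cluster instead of empty lists
```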

@subeeshvasu commented Sep 3, 2019

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

For me, kubectl didn't work with just the above commands. However, I could make it work by running the following export command in addition:
export KUBECONFIG=$HOME/.kube/config

Just to be clear, what worked for me is the following sequence:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
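The export only lasts for the current shell; to make it survive new logins, append it to your shell profile (a sketch, assuming bash):

```sh
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc
```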

@bogere commented Sep 4, 2019

Sometimes, especially on macOS, just enable Kubernetes in Docker Desktop for Mac and ensure that it is running; that is what I did to resolve the above error.

[Screenshot: Docker Desktop settings with Kubernetes running]

@Anuradha677 commented Oct 3, 2019

I hit the same issue. I ran the command below and the issue was resolved:
gcloud container clusters get-credentials micro-cluster --zone us-central1-a

@p8ul commented Oct 11, 2019

I experienced this error after switching between projects and logins. I solved the issue by running this command:

gcloud container clusters get-credentials --region your-region gke-us-east1-01

REF

@aescobar-icc

Thanks @p8ul, that solved my issue.

@zhangdavids

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

solved 👏

@nightswimmings commented Nov 25, 2019

This happened to me because my .kube/config file had wrong indentation (due to manual editing).

@tnduy27 commented Dec 1, 2019

I had the same problem and resolved it completely. If you are using Ubuntu, please follow these steps:

  1. Remove Kubernetes if present: https://stackoverflow.com/questions/44884322/how-to-remove-kubectl-from-ubuntu-16-04-lts
  2. Follow the steps here: https://ubuntu.com/kubernetes/install

Thanks

@manishalankala commented Jan 28, 2020

> I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the `kubeadm init` command, and you need to copy it to all your worker (minion) nodes yourself; kubeadm does not do this for you.

What was the solution to this?

@seyfbarhoumi

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

Worked for me, thanks.

@ghost commented Feb 25, 2020

> The connection to the server localhost:8080 was refused - did you specify the right host or port?

Do I run these commands on the master or on a node? I am getting the error on a node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

@ghost commented Mar 26, 2020

cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory

@codebyalokgupta

If anyone is using Docker Desktop on Mac, go to Docker Desktop preferences and enable Kubernetes; it is not enabled by default. Once it shows Kubernetes running, this should be resolved.

[Screenshot: Docker Desktop preferences with Kubernetes enabled]

@hashimyousaf

> If anyone is using Docker Desktop on Mac, go to Docker Desktop preferences and enable Kubernetes; it is not enabled by default. Once it shows Kubernetes running, this should be resolved.
>
> [Screenshot: Docker Desktop preferences with Kubernetes enabled]

Thanks, it solved my problem. :)

@bng-github commented May 18, 2020

  1. systemctl status kubelet -> it should be in running state
  2. kubeadm reset -> reset kubeadm using this command
  3. Now run "kubectl get pods" -> you will get the pods
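Note that `kubeadm reset` tears down the node's cluster state, so step 3 only works after re-initializing and re-copying the kubeconfig; a hedged sketch of the full sequence:

```sh
sudo systemctl status kubelet   # 1. should be active (running)
sudo kubeadm reset              # 2. caution: wipes this node's cluster state
sudo kubeadm init               # re-create the control plane (master only)
# redo the admin.conf copy from the comments above, then:
kubectl get pods                # 3.
```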

@moshinde

I faced this issue when I installed kubectl as root and initialized the Kubernetes cluster as a different user. Using the same user resolved the issue.

@ica10888 (Contributor) commented Jul 2, 2020

Maybe this is what caused it: in some containers, the environment variable is missing. You can execute the following command to set it:

export KUBECONFIG=/etc/kubernetes/admin.conf

/etc/kubernetes/admin.conf is volume-mounted from the same path on the master node.

@felipeschossler

> I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the `kubeadm init` command, and you need to copy it to all your worker (minion) nodes yourself; kubeadm does not do this for you.

I love you, so simple as that! ❤️

@ibrahiminui

If you're using EKS, the error means kubectl is not configured for your cluster yet. To fix this, update your kubeconfig with the command below:

aws eks --region <region> update-kubeconfig --name <cluster-name>
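With hypothetical placeholder values filled in (us-east-1 and my-cluster are examples, not real names):

```sh
aws eks --region us-east-1 update-kubeconfig --name my-cluster
kubectl get svc   # quick check that the new context works
```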

@saronavee

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

Thank you. it works well 💯

@tylerlittlefield

For anyone who had this working but then it randomly stopped: I noticed that my environment variable was no longer set; below is how I resolved it.

# check if env is set
echo $KUBECONFIG

# if it returns nothing, set env
export KUBECONFIG=~/.kube/<name of your config file>

# if you don't have that file to begin with, you might try copying it from master node
scp <user>@<master ip>:~/.kube/config ~/.kube/<name you want to give to config file>

@jamesfranklinnetsec

> I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the `kubeadm init` command, and you need to copy it to all your worker (minion) nodes yourself; kubeadm does not do this for you.

Thanks, this was my problem. Cheerio!

@Hawaiideveloper

Resolved:

"The connection to the server localhost:8080 was refused - did you specify the right host or port?"

I suggest clicking the link here

@Yavdhesh

For me, I had accidentally installed both kubectl and MicroK8s, so I uninstalled kubectl.
microk8s.kubectl version <---- works fine now
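If you would rather keep typing plain kubectl with MicroK8s, snap can alias it (a sketch, assuming a snap-based install):

```sh
sudo snap alias microk8s.kubectl kubectl
kubectl version
```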

@xyzkpz commented Jul 3, 2021

If you are getting this error, go to this URL:
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
It will fix your errors. Enjoy, everyone!

@preciousonyekwere

> Running these commands solved the issue:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
>
> After running them, kubectl is working.

Run these commands as the ubuntu user, NOT the root user, if you have issues deploying on k8s.

@Tebeye commented Feb 1, 2022

Hello, I am getting the same error even though I copied the configuration file from /etc/kubernetes/admin.conf.
I even tried export KUBERNETES_MASTER=http://MasterIP:8080.
Does anyone know another way to solve this problem?

@KalinIvanov-l

> If anyone is using Docker Desktop on Mac, go to Docker Desktop preferences and enable Kubernetes; it is not enabled by default. Once it shows Kubernetes running, this should be resolved.
>
> [Screenshot: Docker Desktop preferences with Kubernetes enabled]

thanks, solved!
