Running Kubernetes Locally via Docker - kubectl get nodes returns "The connection to the server localhost:8080 was refused - did you specify the right host or port?" #23726
Comments
Have the same issue with version 1.2.0 +1 |
@xificurC @jankoprowski Have you checked whether the apiserver is running? Please take a look at our troubleshooting guide: If you still need help, please ask on stackoverflow. |
apiserver failed with:
|
I also met that problem, and my apiserver has not failed; all the processes (apiserver, controller-manager, scheduler, kubelet and kube-proxy) are running normally. My Docker version is 1.11.2. Does anyone know how to resolve this problem? |
I have met this problem too. Since I need to use Kubernetes 1.2.2, I use Docker to deploy Kubernetes. The same problem happens. The apiserver is down. Logs here,
The apiserver has failed and I cannot deploy Kubernetes. Does anyone know about it? |
Try using --server to specify your master: |
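For illustration, pointing kubectl at the API server explicitly looks something like this (the master address is assumed; 8080 is the old insecure port):
kubectl --server=http://<master-ip>:8080 get nodes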
Hello, I'm getting the following error on CentOS 7; how can I solve this issue?
|
You can solve this with "kubectl config":
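A minimal sketch of what that could look like (the cluster name, context name, and server address are assumed):
kubectl config set-cluster local --server=http://<master-ip>:8080
kubectl config set-context local --cluster=local
kubectl config use-context local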
|
In my case I just had to remove |
Hi, if I configure KUBE_API_ADDRESS with the value below |
I was trying to get the status from a remote system using Ansible and was facing the same issue. |
Similar to @sumitkau, I solved my problem by setting a new kubelet config location using: |
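One way to point kubectl at a different kubeconfig location, as a sketch (the path is assumed):
export KUBECONFIG=/etc/kubernetes/admin.conf
# or per command:
kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf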
Update the entry in /etc/kubernetes/apiserver (on the master server). |
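On distributions that ship /etc/kubernetes/apiserver, the entry in question usually looks something like the following (the bind address shown is an assumption, not a recommendation):
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"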
If this happens in GCP, the below most likely will resolve the issue:
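A sketch of the usual GCP fix (cluster name and zone are placeholders):
gcloud container clusters get-credentials <cluster-name> --zone <zone>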
|
Thanks to @mamirkhani. I solved this error. I think this is the recommended solution. |
I had the same problem. When creating a cluster via the web GUI in Google Cloud and trying to run kubectl I get
All you have to do is fetch the kubectl config for your cluster, which will be stored in $HOME/.kube/config:
Now kubectl works just fine |
kubectl is expecting ~/.kube/config as the filename for its configuration. The quick fix that worked for me was to create a symbolic link:
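A sketch of such a symlink (the source path is hypothetical; it depends on where your deployment wrote its kubeconfig):
mkdir -p ~/.kube
ln -s /path/to/your/deployment/kubeconfig ~/.kube/config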
N.B. This was for a "conjure-up kubernetes" deployment. |
This issue had me confused for a week; it seems to be working for me now. If you have this issue, first of all you need to know which node it happens on.
If it is a master node, make sure all of the Kubernetes pods are running; mine looks like this. If they do not, verify that you have those files in your /etc/kubernetes/ directory, then see whether kubectl version works or not. If it still does not work, follow the tutorial at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, tear down your cluster and rebuild your master.
If it happens on a (slave) node, make sure you have the files after logging in as a normal user on that (slave) node. You probably won't see a config file in your ~/.kube, so create that folder, then copy admin.conf from your master node into your ~/.kube/ directory on the (slave) node as config with a normal user, then try kubectl version. It works for me; a sketch of the copy step follows below. |
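A sketch of that copy step on a worker node (user and master hostname are placeholders; reading admin.conf may require sudo on the master):
mkdir -p ~/.kube
scp <user>@<master-host>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl version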
While I know that there might be multiple reasons for failure here, in my case removing |
I had this issue. This solution worked for me:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
If you don't have
rm -rf ~/.kube/cache |
You need to switch context. |
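For illustration, switching contexts with kubectl (the context name is a placeholder):
kubectl config get-contexts
kubectl config use-context <context-name>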
Hi team, we need to install SAP Vora, for which Kubernetes and Docker are prerequisites. We have installed the Kubernetes master, kubectl, and Docker, but when we check kubectl cluster-info
#kubectl cluster-info dump
and when we check systemctl status kubelet -l
kubelet.service - Kubernetes Kubelet Server
we performed the settings below
sudo cp /etc/kubernetes/admin.conf $HOME/
but with no luck. Can anyone help? Regards |
Delete the minikube VM and its configuration files, then reinstall minikube (v0.25.2); other versions may have pitfalls.
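A sketch of that clean reinstall, assuming the default minikube and kubectl config locations:
minikube delete
rm -rf ~/.minikube ~/.kube
# reinstall minikube v0.25.2, then:
minikube start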
|
Use the command below; it worked for me:
mkdir -p $HOME/.kube |
Thanks! this worked! |
In my case, I had rebooted the Kubernetes master node, and when it restarted the swap partition was enabled again by default:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: activating (auto-restart) (Result: exit-code) since 금 2018-04-20 15:27:00 KST; 6s ago
Docs: http://kubernetes.io/docs/
Process: 17247 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 17247 (code=exited, status=255)
Filename Type Size Used Priority
/dev/sda6 partition 950267 3580 -1
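The usual remedy here, as a sketch, is to turn swap off and restart the kubelet (the fstab edit keeps it off after the next reboot):
sudo swapoff -a
# also comment out the swap line in /etc/fstab
sudo systemctl restart kubelet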
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2019-01-14 08:28:56 -05; 15min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 7018 (kubelet)
Tasks: 25 (limit: 3319)
CGroup: /system.slice/kubelet.service
└─7018 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 47h v1.13.2
k8snode1 Ready <none> 45h v1.13.2
k8snode2 Ready <none> 45h v1.13.2
|
I didn't run this; skipping mkdir -p $HOME/.kube caused the problem. |
ip route add default via xxx.xxx.xxx.xxx on k8s master |
$ kubectl apply -f Deployment.yaml |
Well, it may sound stupid, but maybe you didn't install minikube to run your cluster locally. |
Try reinstalling minikube if you have one, or try using |
Make sure it removes all the containers.
After you make sure all the containers have been removed, restart the kubelet.
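A sketch of that check and restart, assuming a Docker container runtime:
docker ps -a    # should show no leftover Kubernetes containers
sudo systemctl restart kubelet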
|
[mayuchau@cg-id .kube]$ kubectl get nodes
I am getting the above error. I tried the solutions mentioned above, but they didn't work for me. |
Issue resolved after verifying the permissions of /var/run/docker.sock on the master node. |
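For illustration, a common way to check and fix the socket permissions (the docker group is assumed to exist):
ls -l /var/run/docker.sock
# if your user is not in the owning group, add it and log in again
sudo usermod -aG docker $USER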
Here is how I resolved it:
After a successful run of this command you would be able to run: |
Thanks!!! E.g.: |
One possible cause of this problem is the following; check with:
and if there is no
|
I faced a similar issue, which was resolved with |
On macOS: I am running Kubernetes locally via Docker, to be specific https://k3d.io/. So post-installation, once the cluster is created, if I execute the command
PS: Docker and docker-machine were installed via Homebrew |
What does |
navkmurthy$ k3d cluster create -p 5432:30080@agent[0] -p 9082:30081@agent[0] --agents 3 --update-default-kubeconfig
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. |
@paulmwatson
navkmurthy$ kubectl config get-contexts
navkmurthy$ |
In my case, I hadn't run |
I did a
I then ran
NAME READY STATUS RESTARTS AGE |
This answer works for me because the machine needs to know where the master (admin) is, not localhost. |
My issue happened on RHEL and it turned out that my Docker daemon was inactive. How to fix this issue: |
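A sketch of the fix with standard systemd commands:
sudo systemctl start docker
sudo systemctl enable docker
systemctl status docker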
Had the same problem; my setup has 3 nodes (1 control node and 2 workers).
[asd1@kubevm-worker1 ~]$ kubectl get nodes
Solved it by:
After this:
[asd1@kubevm-worker2 ~]$ kubectl get nodes
[asd1@kubevm-worker1 .kube]$ kubectl get nodes |
Going through this guide to set up Kubernetes locally via Docker, I end up with the error message as stated above.
Steps taken:
- export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
- ran the docker run command
- kubectl binary put on PATH (which kubectl works)
- kubectl get nodes
In short, no magic. I am running this locally on Ubuntu 14.04, Docker 1.10.3. If you need more information, let me know.