
Forbidden error when retrieving logs from non-master node's pods #211

Closed
crashburn65 opened this issue Mar 28, 2017 · 25 comments
Assignees
Labels
documentation/content-gap priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@crashburn65

What keywords did you search in kubeadm issues before filing this one?

kubectl logs
logs forbidden curl insecure

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.5", GitCommit:"894ff23729bbc0055907dd3a496afb725396adda", GitTreeState:"clean", BuildDate:"2017-03-23T16:14:24Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes cluster consists of a single master node and minion node, joined together by kubeadm.

What happened?

From a remote machine (that is not the master or minion), when doing a kubectl logs on any pod that lives on the minion node, the following error occurs:

Error from server: Get https://<minion_ip>:10250/containerLogs/default/critics-1347287238-wdssk/critics: Forbidden

When doing a kubectl logs on any of the pods that lives on the master node, no error occurs and logs can be retrieved as expected.

When doing a curl of the URL returned in the error above with a --insecure, I am able to pull the logs from the affected node.

What you expected to happen?

Should be able to retrieve logs of a pod from a non-master node.

Anything else we need to know?

@liggitt
Member

liggitt commented Apr 3, 2017

I suspect the minion is not being given serving certs the master apiserver trusts, and is simply generating its own.

@gousse

gousse commented May 10, 2017

Same issue here; the cluster installation was done with kubeadm.

$ kubeadm version

kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

List of running pods:

ubuntu@master-01:~$ kubectl get pods -n kube-system -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
etcd-master-01                      1/1       Running   4          6d        10.100.0.98   master-01
kube-apiserver-master-01            1/1       Running   4          6d        10.100.0.98   master-01
kube-controller-manager-master-01   1/1       Running   5          6d        10.100.0.98   master-01
kube-dns-3913472980-tx9gk           3/3       Running   9          6d        10.44.0.1     master-01
kube-proxy-5lfr4                    1/1       Running   3          6d        10.100.0.91   node-06
kube-proxy-7gk91                    1/1       Running   3          6d        10.100.0.94   node-03
kube-proxy-7kkd3                    1/1       Running   3          6d        10.100.0.93   node-01
kube-proxy-994v3                    1/1       Running   3          6d        10.100.0.95   node-05
kube-proxy-bbmkp                    1/1       Running   3          6d        10.100.0.97   node-02
kube-proxy-g593h                    1/1       Running   4          6d        10.100.0.98   master-01
kube-proxy-lft8f                    1/1       Running   3          6d        10.100.0.96   node-04
kube-scheduler-master-01            1/1       Running   4          6d        10.100.0.98   master-01
weave-net-1948p                     2/2       Running   9          6d        10.100.0.91   node-06
weave-net-2632r                     2/2       Running   9          6d        10.100.0.93   node-01
weave-net-394xl                     2/2       Running   9          6d        10.100.0.94   node-03
weave-net-ffl0r                     2/2       Running   9          6d        10.100.0.96   node-04
weave-net-j1d9d                     2/2       Running   9          6d        10.100.0.95   node-05
weave-net-lcf3c                     2/2       Running   11         6d        10.100.0.97   node-02
weave-net-pmss7                     2/2       Running   13         6d        10.100.0.98   master-01

Logs from a pod running on the master:

ubuntu@master-01:~$ kubectl -n kube-system logs kube-proxy-g593h
I0510 07:40:35.618186       1 server.go:225] Using iptables Proxier.
W0510 07:40:35.695824       1 server.go:469] Failed to retrieve node info: User "system:serviceaccount:kube-system:kube-proxy" cannot get nodes at the cluster scope. (get nodes master-01)
W0510 07:40:35.696192       1 proxier.go:293] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0510 07:40:35.696260       1 proxier.go:298] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0510 07:40:35.696312       1 server.go:249] Tearing down userspace rules.
E0510 07:40:35.731250       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:49: Failed to list *api.Endpoints: User "system:serviceaccount:kube-system:kube-proxy" cannot list endpoints at the cluster scope. (get endpoints)
E0510 07:40:35.731407       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:46: Failed to list *api.Service: User "system:serviceaccount:kube-system:kube-proxy" cannot list services at the cluster scope. (get services)
E0510 07:40:36.749827       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:49: Failed to list *api.Endpoints: User "system:serviceaccount:kube-system:kube-proxy" cannot list endpoints at the cluster scope. (get endpoints)
E0510 07:40:36.751095       1 reflector.go:201] k8s.io/kubernetes/pkg/proxy/config/api.go:46: Failed to list *api.Service: User "system:serviceaccount:kube-system:kube-proxy" cannot list services at the cluster scope. (get services)
I0510 07:40:37.829246       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0510 07:40:37.829987       1 conntrack.go:66] Setting conntrack hashsize to 32768
I0510 07:40:37.830401       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0510 07:40:37.830423       1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600

Try to get logs for a pod running on a worker node:

ubuntu@master-01:~$ kubectl logs -n kube-system kube-proxy-lft8f
Error from server: Get https://10.100.0.96:10250/containerLogs/kube-system/kube-proxy-lft8f/kube-proxy: Forbidden

@gousse

gousse commented May 11, 2017

I found the reason.
The no_proxy environment variable must be set to include all node IPs;
otherwise the request goes through the proxy, and it's the proxy that answers Forbidden.
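To make the diagnosis above concrete, here is a simplified sketch (not how real HTTP clients parse no_proxy exactly, which also supports suffix matching) of how a comma-separated no_proxy list decides whether a request to a node IP bypasses the proxy; the `in_no_proxy` helper is hypothetical:

```shell
# Sketch: decide whether a request to `ip` would bypass the proxy,
# given a comma-separated no_proxy list (exact-match only).
in_no_proxy() {
  ip="$1"; list="$2"
  case ",$list," in
    *",$ip,"*) echo "bypass proxy" ;;
    *)         echo "via proxy" ;;
  esac
}

# The worker node from the error above (10.100.0.96) must be in the list:
in_no_proxy 10.100.0.96 "localhost,127.0.0.1,10.100.0.96"   # bypass proxy
in_no_proxy 10.100.0.96 "localhost,127.0.0.1"               # via proxy
```

If the node's IP is missing from the list, the apiserver's request to the kubelet on port 10250 is sent to the proxy instead, which returns the Forbidden response seen above.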

@luxas
Member

luxas commented May 12, 2017

@gousse Could you document that on the kubeadm reference page, please?

@tomdee

tomdee commented May 16, 2017

I'm hitting this at the moment - a workaround would be great!

@tomdee

tomdee commented May 17, 2017

I spent a while trying to use no_proxy, both with * and with the IP addresses of all the nodes, but it still did not resolve the problem. Any specific guidance would be really useful.

@luxas luxas added documentation/content-gap priority/backlog Higher priority than priority/awaiting-more-evidence. labels May 29, 2017
@jamiehannaford
Contributor

@gousse So setting export NO_PROXY=$no_proxy,<node1-ip>,<node2-ip>,... solved the issue for you?

@yanhongwang

@jamiehannaford @tomdee
Yes, in my case no_proxy had to be set before the k8s cluster was set up.
That solved the Forbidden error.
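A minimal sketch of the workaround described above - the IPs are hypothetical (taken from examples in this thread) and must be replaced with every master and worker address; run this on each machine before `kubeadm init` / `kubeadm join`:

```shell
# Hypothetical master/worker IPs -- replace with your own node addresses.
export no_proxy=localhost,127.0.0.1,10.100.0.98,10.100.0.96
export NO_PROXY=$no_proxy   # some tools only read the upper-case variant

# ...then run `kubeadm init` (master) or `kubeadm join` (workers) in the same
# environment, so the generated components inherit these settings.
```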

@jamiehannaford jamiehannaford self-assigned this Oct 10, 2017
@erkules

erkules commented Oct 15, 2017

Which components are involved in the kubectl logs command? Do the master nodes only need the worker nodes in their no_proxy? Does "master node" mean the apiserver or some other controller?

@liggitt
Member

liggitt commented Oct 15, 2017

kubectl > apiserver > node hosting the pod

@erkules

erkules commented Oct 15, 2017

thx

@erkules

erkules commented Oct 15, 2017

Is there a way to make sure kubectl logs uses DNS names instead of IPs? Autoscaling and IPs don't work well together.

@liggitt
Member

liggitt commented Oct 15, 2017

Nodes report their network addresses in their Node API object status.

The apiserver contacts nodes using the preferred address type as determined by the --kubelet-preferred-address-types flag:

List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
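As a sketch, that flag can be reordered in the kube-apiserver static pod manifest so DNS names are preferred over IPs (hypothetical excerpt; the path and structure are assumed from a typical kubeadm setup):

```yaml
# Hypothetical excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml,
# with DNS address types moved ahead of the IP types:
spec:
  containers:
  - command:
    - kube-apiserver
    - --kubelet-preferred-address-types=InternalDNS,ExternalDNS,Hostname,InternalIP,ExternalIP
```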

@liggitt
Member

liggitt commented Oct 15, 2017

Not all kubelet cloud providers report dns addresses currently.

@erkules

erkules commented Oct 15, 2017

\o/ Awesome. Saved my day thx!
Worked great in AWS.

@luxas
Member

luxas commented Oct 20, 2017

@jamiehannaford you're working in the troubleshooting doc. Could you add this to the list?
(if kubectl logs doesn't work, check the proxy settings)

@Snipes999

@yanhongwang I'm hitting the same proxy issue. My cluster runs well so far, but I can't retrieve logs. The no_proxy IPs are set. Do I really need to recreate my cluster, or is there any other way to get this running?

@yanhongwang

Hi @Snipes999

My environment:
Ubuntu: 16.04 LTS
Kubernetes: 1.7.8-00
Deployment: Ansible

http_proxy and https_proxy were set to default values in my network environment.

So I added the master IP and minion IPs to the no_proxy environment variable on every machine in the Kubernetes cluster.
That way all the machines can talk to each other without going through the proxy server, in case the proxy server blocks some Kubernetes port.

Because I don't know exactly what kubeadm init does to the system,
I destroyed the machines and set no_proxy before running kubeadm init.

I use Ansible to deploy the machines automatically, so this was not difficult in my case.

Otherwise, you can probably run kubeadm reset and then try again.

Hope this helps.

Hong

@Snipes999

I'm using Ubuntu 17.04 and Kubernetes 1.8.1, and it seems to work now. I tried a couple of things, but I think the resolution was changing the no_proxy settings in the YAML files (/etc/kubernetes/manifests/...yaml) to match the current environment settings in /etc/environment.
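The manifest change described above might look like the following (a hypothetical excerpt; the IPs are placeholders and the values must mirror whatever is in /etc/environment):

```yaml
# Hypothetical env section added to /etc/kubernetes/manifests/kube-apiserver.yaml,
# mirroring the proxy settings in /etc/environment:
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: NO_PROXY
      value: "localhost,127.0.0.1,10.100.0.96,10.100.0.98"
    - name: no_proxy
      value: "localhost,127.0.0.1,10.100.0.96,10.100.0.98"
```

The kubelet watches the manifests directory, so saving the file restarts the static pod with the new environment.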

@luxas
Member

luxas commented Oct 27, 2017

@Snipes999 I'll close this issue as solved then. Thank you!

@luxas luxas closed this as completed Oct 27, 2017
@tomdee

tomdee commented Oct 31, 2017

@luxas I don't think this is solved. Unless I'm not understanding this correctly...

  1. A user creates a kubernetes cluster with kubeadm
  2. At some point they try to use kubectl logs ...
  3. They find it doesn't work and if they are lucky they find this issue or some troubleshooting doc with advice on needing to manually edit some files and then destroy and recreate their cluster!

Shouldn't this be actually fixed so that kubectl logs just works?

@luxas
Member

luxas commented Nov 1, 2017

That can only happen under certain conditions when you're behind proxies.
The umbrella issue for making detection of front proxies better in k8s/kubeadm is #324, and @kad is owning that area. I think it gets better all the time.

@kad
Member

kad commented Nov 1, 2017

@tomdee I'm constantly hitting issues where something doesn't work when a person is on an isolated network behind proxies, and I'm trying to fix as much as I can. We have several patches already merged into 1.9, and some even backported to 1.8.x, to make this better. Some PRs are still under review but will hopefully be merged into 1.9 soon. If you hit something, please feel free to open an issue and assign it to me or CC me.

@tomdee

tomdee commented Nov 8, 2017

@luxas @kad Thanks for the replies. I think I'm always hitting this with kubeadm running under Vagrant. I don't think a proxy is being used, so maybe I'm hitting a different issue?

@kad
Member

kad commented Nov 8, 2017

@tomdee Please open a support issue with details about your environment (Vagrantfile, network connectivity, distro, Vagrant plugins installed, etc.). We will see what the issue might be.
