
Access problem after minikube deployed ingress #17288

Closed
pptfz opened this issue Sep 21, 2023 · 9 comments
Labels
  • addon/ingress
  • co/docker-driver: Issues related to kubernetes in container
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pptfz

pptfz commented Sep 21, 2023

What Happened?

Installation environment

mac version: Ventura 13.4.1

docker version: docker desktop 4.22.1(118664)

minikube version
minikube version: v1.31.2
commit: fd7ecd9c4599bef9f04c0986c4a0187f98a4396e

install minikube

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
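One note on the download step: the command above always fetches the amd64 build, but on an Apple Silicon Mac the arm64 build is the one to install. A small sketch of picking the matching binary name (the helper function name is made up):

```shell
#!/bin/sh
# Pick the minikube binary name that matches this Mac's CPU.
# "uname -m" prints x86_64 on Intel and arm64 on Apple Silicon.
pick_minikube_binary() {
  case "$(uname -m)" in
    arm64|aarch64) echo "minikube-darwin-arm64" ;;
    *)             echo "minikube-darwin-amd64" ;;
  esac
}

BIN=$(pick_minikube_binary)
# Print the matching download command rather than running it.
echo "curl -LO https://storage.googleapis.com/minikube/releases/latest/$BIN"
```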

install ingress

minikube addons enable ingress
$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS      AGE
pod/ingress-nginx-admission-create-699jj        0/1     Completed   0             119m
pod/ingress-nginx-admission-patch-js78v         0/1     Completed   0             119m
pod/ingress-nginx-controller-7799c6795f-rn6fx   1/1     Running     3 (10m ago)   119m

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.106.79.188   <none>        80:31835/TCP,443:31196/TCP   120m
service/ingress-nginx-controller-admission   ClusterIP   10.107.44.180   <none>        443/TCP                      120m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           120m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-7799c6795f   1         1         1       119m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           58s        120m
job.batch/ingress-nginx-admission-patch    1/1           57s        119m

Installation test examples

kubectl apply -f https://storage.googleapis.com/minikube-site-examples/ingress-example.yaml

$ kubectl get all -n test         
NAME          READY   STATUS    RESTARTS   AGE
pod/bar-app   1/1     Running   0          10m
pod/foo-app   1/1     Running   0          10m

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/bar-service   ClusterIP   10.106.76.205    <none>        8080/TCP   10m
service/foo-service   ClusterIP   10.109.196.145   <none>        8080/TCP   10m

$ kubectl get ingress -n test
NAME              CLASS   HOSTS   ADDRESS        PORTS   AGE
example-ingress   nginx   *       192.168.49.2   80      11m

problem1

From the host, I cannot connect via telnet to ports 80/443 of the NodePort service ingress-nginx-controller in the ingress-nginx namespace:
$ telnet 127.0.0.1 80
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host

problem2

The ingress address of the example installed in the test namespace cannot be pinged:
$ ping 192.168.49.2 
PING 192.168.49.2 (192.168.49.2): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
^C
--- 192.168.49.2 ping statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss

The service cannot be accessed, even though the official documentation shows the following example working:

$ curl <ip_from_above>/foo
Request served by foo-app
...

$ curl <ip_from_above>/bar
Request served by bar-app
...

My Mac's firewall is turned off, and there are no other restrictions. How can I troubleshoot this problem?

Attach the log file

log.txt

Operating System

macOS (Default)

Driver

Docker

@spowelljr
Member

Hi @pptfz, the problem is that Docker Desktop has its own networking, which prevents you from directly reaching a pod from your host without some extra work.

You might have skipped the part in the instructions where it says:

Note for Docker Desktop Users:
To get ingress to work you’ll need to open a new terminal window and run `minikube tunnel` and in the following
step use `127.0.0.1` in place of `<ip_from_above>`.

Run `minikube tunnel` in another tab and then try `curl 127.0.0.1/foo`; the example works for me on macOS with Docker Desktop:

$ curl 127.0.0.1/foo
Request served by foo-app

HTTP/1.1 GET /foo

Host: 127.0.0.1
Accept: */*
User-Agent: curl/8.1.2
X-Forwarded-For: 10.244.0.1
X-Forwarded-Host: 127.0.0.1
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 10.244.0.1
X-Request-Id: 43f7b119432a17aae93dca91cc476e19
X-Scheme: http
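If keeping `minikube tunnel` open is inconvenient, a plain port-forward to the controller service is another way to reach the ingress from the host. A sketch, not part of the thread: the function name is made up, and the commands assume kubectl can reach the cluster (otherwise the block just prints a note):

```shell
#!/bin/sh
# Forward the ingress controller's port 80 to localhost:8080 and probe /foo.
probe_ingress() {
  if kubectl get ns ingress-nginx >/dev/null 2>&1; then
    # Background port-forward to the NodePort service shown above.
    kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80 >/dev/null 2>&1 &
    pf_pid=$!
    sleep 2
    # HOSTS is '*' on the example ingress, so no Host header is needed.
    curl -s http://127.0.0.1:8080/foo || echo "request failed"
    kill "$pf_pid" 2>/dev/null
  else
    echo "no cluster reachable; commands shown for reference"
  fi
}

probe_ingress
```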

@spowelljr spowelljr added kind/support Categorizes issue or PR as a support question. addon/ingress co/docker-driver Issues related to kubernetes in container labels Sep 27, 2023
@pptfz

pptfz commented Oct 7, 2023

Thank you very much for your reply.

@pptfz

pptfz commented Oct 8, 2023

@spowelljr

Following the instructions at https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/, I did the following.

Create a file in /etc/resolver/minikube-test with the following contents.

$ cat /etc/resolver/minikube-test
domain test
nameserver 192.168.49.2
search_order 1
timeout 5

192.168.49.2 is the IP I saw after deploying the test sample:

$ kubectl get ingress
NAME              CLASS   HOSTS   ADDRESS        PORTS   AGE
example-ingress   nginx   *       192.168.49.2   80      23h

Then I configured the in-cluster DNS server to resolve local DNS names inside the cluster:

$ kubectl get configmap coredns -n kube-system
NAME      DATA   AGE
coredns   1      16d
$ kubectl get configmap coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
    test:53 {
            errors
            cache 30
            forward . 192.168.49.2
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2023-09-21T07:50:00Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "15763"
  uid: 127c1eb5-fde6-47d4-89eb-cb85f78d4b0e

Then I deployed the test sample:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml

But the name cannot be resolved. What's the problem?

$ nslookup hello-john.test 192.168.49.2
;; connection timed out; no servers could be reached
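With the Docker driver on macOS, the node IP 192.168.49.2 is not routable from the host, so a timeout from the host is expected. One way to check whether ingress-dns itself answers is to run the same query from inside the cluster, where the node IP is reachable. A sketch: the probe pod name is made up, and it assumes kubectl access and a pullable busybox image (otherwise the block just prints a note):

```shell
#!/bin/sh
# Query the ingress-dns server from a throwaway pod inside the cluster.
check_ingress_dns() {
  if kubectl get nodes >/dev/null 2>&1; then
    kubectl run dns-probe --rm -i --restart=Never --image=busybox -- \
      nslookup hello-john.test 192.168.49.2 2>&1
  else
    echo "no cluster reachable; command shown for reference"
  fi
}

check_ingress_dns
```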

@n0ne

n0ne commented Nov 18, 2023

@pptfz did you solve this?

@pptfz

pptfz commented Nov 22, 2023

@n0ne The problem isn't solved; I've switched to kind instead.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale May 15, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage bot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


5 participants