
Unable to enable ingress addon in minikube on ubuntu with docker driver #17699

Closed
alkuma opened this issue Nov 30, 2023 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


alkuma commented Nov 30, 2023

What Happened?

I am attempting to access a service hosted on minikube (running on Ubuntu with the Docker driver) from outside the cluster via an Ingress, but it is not working.

First, the details of the OS and minikube:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:    22.04
Codename:   jammy
Minikube is running:

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Minikube is using the Docker driver on Linux:

$ minikube start
😄  minikube v1.32.0 on Ubuntu 22.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                      │
│    Registry addon with docker driver uses port 32780 please use that instead of default port 5000    │
│                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
📘  For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
    ▪ Using image docker.io/registry:2.8.3
    ▪ Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
🔎  Verifying registry addon...
🌟  Enabled addons: default-storageclass, storage-provisioner, registry
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

I have a running pod, service, deployment, and replica set:

$ kubectl get all
NAME                         READY   STATUS    RESTARTS        AGE
pod/mysvc-54cdf74f4f-kmn44   1/1     Running   17 (6m5s ago)   137m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP    3h54m
service/mysvc          ClusterIP   10.107.181.101   <none>        5000/TCP   137m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysvc   1/1     1            1           137m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysvc-54cdf74f4f   1         1         1       137m
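
Worth noting in passing: the pod above shows 17 restarts, which may deserve investigation on its own. A diagnostic sketch, reusing the pod name from the listing above:

$ kubectl describe pod mysvc-54cdf74f4f-kmn44
$ kubectl logs mysvc-54cdf74f4f-kmn44 --previous   # logs from the previous container instance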

Next, we have an ingress definition as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mysvc.k2.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mysvc 
            port:
              number: 80
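
For reference, a sketch of applying the manifest and cross-checking the backend port against the Service (the filename local-ingress.yaml is an assumption; note that service/mysvc above exposes port 5000 while this backend references port 80):

$ kubectl apply -f local-ingress.yaml   # filename assumed
$ kubectl get svc mysvc -o jsonpath='{.spec.ports[0].port}{"\n"}'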

The minikube IP is:

$ minikube ip
192.168.49.2

The /etc/hosts entry is made as follows:

192.168.49.2 mysvc.k2.local
Therefore the expectation is that http://mysvc.k2.local/ would point to the service.
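
As a quick sanity check (a sketch; the output assumes the hosts entry above), the name resolution can be confirmed with:

$ getent hosts mysvc.k2.local
192.168.49.2    mysvc.k2.local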

On starting the tunnel:

$ minikube tunnel
✅  Tunnel successfully started

📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

❗  The service/ingress local-ingress requires privileged ports to be exposed: [80 443]
🔑  sudo permission will be asked for it.
🏃  Starting tunnel for service local-ingress.
[sudo] password for osuser: 

After providing the password for sudo access, the ingress details are:

$ kubectl get ingress local-ingress
NAME            CLASS    HOSTS          ADDRESS   PORTS   AGE
local-ingress   <none>   mysvc.k2.local             80      35m
$ kubectl describe ingress local-ingress
Name:             local-ingress
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host          Path  Backends
  ----          ----  --------
  mysvc.k2.local  
                /   mysvc:80 ()
Annotations:    nginx.ingress.kubernetes.io/rewrite-target: /
Events:         <none>
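
With the tunnel running, it should be listening on the privileged ports on the host; a diagnostic sketch (ss is part of iproute2 and present on Ubuntu by default):

$ sudo ss -ltnp | grep -E ':(80|443) '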

However, the issue is that the underlying minikube IP is itself not reachable from the host:

$ curl mysvc.k2.local
^C 
$ ping mysvc.k2.local
PING mysvc.k2.local (192.168.49.2) 56(84) bytes of data.
^C
--- mysvc.k2.local ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1032ms
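
With the Docker driver on Linux, 192.168.49.2 sits on the Docker bridge network that minikube creates (also named minikube) and is normally routable directly from the host. A diagnostic sketch for checking that the network and route exist:

$ docker network inspect minikube --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
$ ip route | grep 192.168.49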

I later discovered that the ingress addons were disabled:

$ minikube addons list | grep ingress
| ingress                     | minikube | disabled     | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | minikube                       |

Enabling them leads to an error:

$ minikube addons enable ingress
💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
🔎  Verifying ingress addon...

❌  Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    Please also attach the following file to the GitHub issue:                             │
│    - /tmp/minikube_addons_ee0e2a5ffa0c23cbbbf48fa0fc668431256e65f3_0.log                  │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
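
The callback that times out is waiting for pods matching app.kubernetes.io/name=ingress-nginx to become Ready, so those pods can be inspected directly; a diagnostic sketch (the addon deploys into the ingress-nginx namespace):

$ kubectl get pods -n ingress-nginx
$ kubectl describe pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --all-containers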

Apart from starting the minikube tunnel (which has been done already), is there anything else that needs to be done?

Attaching the logs: minikube_addons_ee0e2a5ffa0c23cbbbf48fa0fc668431256e65f3_0.log

Attach the log file

logs.txt

Operating System

Ubuntu

Driver

Docker


alkuma commented Dec 1, 2023

Further update:

I tried to enable the ingress-dns addon first:

$ minikube addons enable ingress-dns
💡  ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
🌟  The 'ingress-dns' addon is enabled

It worked.

Next, I tried to enable the ingress addon as well:

$ minikube addons enable ingress
💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

Surprisingly, that worked too. So my issue is now resolved without actually doing anything other than enabling ingress-dns before ingress.
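
To confirm the end-to-end path with both addons enabled, the earlier checks can be re-run (a sketch; once the controller admits the ingress, the ADDRESS column should be populated):

$ kubectl get ingress local-ingress
$ curl http://mysvc.k2.local/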

Any reason why this would happen?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Feb 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Mar 30, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Apr 29, 2024
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
