
creating cluster with k3d version 4.4.1 #10

Open

Jean-Baptiste-Lasselle opened this issue Apr 17, 2021 · 7 comments

Comments

Jean-Baptiste-Lasselle commented Apr 17, 2021

docker network create jbl_network -d bridge
# successfully created the cluster, but failed to start the load balancer and the third agent:
k3d cluster create jblCluster --agents 3 --servers 3 --network jbl_network  -p 8080:80@agent[0] -p 8081:80@agent[1] -p 8090:8090@server[0]  -p 8091:8090@server[1] --api-port 0.0.0.0:7888
# k3d cluster create jblCluster --agents 3 --servers 4 --network jbl_network  -p 8080:80@agent[0] -p 8081:80@agent[1] -p 8090:8090@server[0]  -p 8091:8090@server[1] --api-port 0.0.0.0:7888
# with 5, 6, and up to 9 servers (and 3 agents), not all servers started, and cluster creation failed entirely:
k3d cluster create jblCluster --agents 3 --servers 9 --network jbl_network  -p 8080:80@agent[0] -p 8081:80@agent[1] -p 8090:8090@server[0]  -p 8091:8090@server[1] --api-port 0.0.0.0:7888

# to use the generated KUBECONFIG, switch kubectl to the new cluster's context:

kubectl config use-context k3d-jblCluster
bash-3.2$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:7888
CoreDNS is running at https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
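When some node containers fail to start (as with the load balancer and the third agent above), the underlying Docker containers usually hold the reason. A sketch for inspecting them, assuming the cluster name from the commands above and k3d's `k3d-<cluster>-<role>` container-naming pattern:

```shell
# List every k3d container for the cluster (including exited ones) and keep
# only those that are not up.
docker ps -a --filter 'name=k3d-jblCluster' \
  --format '{{.Names}}\t{{.Status}}' | grep -v 'Up' || true

# Then read the logs of a failed container, e.g. the load balancer
# (name assumed from k3d's naming convention):
docker logs k3d-jblCluster-serverlb 2>&1 | tail -n 20
```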

What are agents and servers in k3s? https://blog.alexellis.io/bare-metal-kubernetes-with-k3s/


Jean-Baptiste-Lasselle commented Apr 18, 2021

bash-3.2$ curl -ivvv https://0.0.0.0:7888/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy --insecure
*   Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0.0.0.0 (127.0.0.1) port 7888 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: O=k3s; CN=k3s
*  start date: Apr 17 23:59:41 2021 GMT
*  expire date: Apr 18 00:00:19 2022 GMT
*  issuer: CN=k3s-server-ca@1618703981
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET /api/v1/namespaces/kube-system/services/kube-dns:dns/proxy HTTP/1.1
> Host: 0.0.0.0:7888
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache, private
< Content-Type: application/json
< Date: Sun, 18 Apr 2021 00:25:19 GMT
< Content-Length: 165
< 
{ [165 bytes data]
100   165  100   165    0     0   4852      0 --:--:-- --:--:-- --:--:--  4852
* Connection #0 to host 0.0.0.0 left intact
* Closing connection 0
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache, private
Content-Type: application/json
Date: Sun, 18 Apr 2021 00:25:19 GMT
Content-Length: 165

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
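The 401 is expected: the raw curl call sends no client certificate, and the k3s API server rejects anonymous requests. One way around it is `kubectl proxy`, which authenticates with the kubeconfig credentials and exposes the same API paths over plain local HTTP (8001 is kubectl's default proxy port):

```shell
# Start an authenticated local proxy to the API server in the background.
kubectl proxy --port=8001 &
PROXY_PID=$!
sleep 1

# Same service-proxy path as the curl above, but without TLS/auth trouble:
curl -s http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy || true

# Stop the proxy when done.
kill "$PROXY_PID" 2>/dev/null || true
```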

@Jean-Baptiste-Lasselle

bash-3.2$ kubectl apply -f ingress-rapide-nginx-k3d.yaml
ingress.networking.k8s.io/nginx created
bash-3.2$ kubectl get all

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-r7gg4   1/1     Running   0          5m8s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   8m22s
service/nginx        ClusterIP   10.43.112.75   <none>        80/TCP    4m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           5m8s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   1         1         1       5m8s
bash-3.2$ 
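The contents of `ingress-rapide-nginx-k3d.yaml` aren't shown; given the `nginx` Service on port 80 above, a minimal manifest producing the `ingress.networking.k8s.io/nginx` object could look like this (the path and ingress-class annotation are assumptions; k3s ships Traefik as its default ingress controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```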

@Jean-Baptiste-Lasselle

docker run busybox ping -c 1 docker.for.mac.localhost | awk 'FNR==2 {print $4}' | sed s'/.$//'

moby/moby#22753 (comment)

https://github.com/AlmirKadric-Published/docker-tuntap-osx

moby/moby#22753

OK, so on Mac there are networking specificities; I'll switch to another machine.

@Jean-Baptiste-Lasselle
Copy link
Author

Jean-Baptiste-Lasselle commented Apr 18, 2021

# follow the instructions at https://github.com/AlmirKadric-Published/docker-tuntap-osx
# instead of 'brew cask install tuntap' (old syntax):
brew install --cask tuntap
# then 'ifconfig | more' should show a tap1 interface with the gateway IP address to use
# for the netmask, I'll try the usual 255.255.255.0, as in https://github.com/AlmirKadric-Published/docker-tuntap-osx/issues/31

git clone https://github.com/AlmirKadric-Published/docker-tuntap-osx
cd ./docker-tuntap-osx
./sbin/docker_tap_install.sh

# now docker MUST be restarted 
killall Docker && open /Applications/Docker.app
# if docker was stopped, just start it again
open /Applications/Docker.app

# once 'docker version' confirms Docker is up, bring up the tap interface:

./sbin/docker_tap_up.sh

# after a short wait, the tap interface gets an IP address you can ping from the
# macOS host and use as a gateway to the containers

I killed and restarted the Docker daemon, ran the instructions again to create tap1, waited until Docker had restarted, then ran the tap-up interface script; after a minute or two, tap1 gets an IP address (run ifconfig):

tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 9e:71:03:4d:6c:1f 
        inet 10.0.75.1 netmask 0xfffffffc broadcast 10.0.75.3
        media: autoselect
        status: active
        open (pid 38576)
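The `0xfffffffc` netmask reported by ifconfig is a /30 (255.255.255.252), so the tap subnet only spans 10.0.75.0–10.0.75.3, which matches the broadcast address shown. A quick pure-shell conversion to double-check:

```shell
# Convert a hex netmask (as printed by macOS ifconfig) to dotted decimal.
hex_to_dotted() {
  n=$(( $1 ))   # shell arithmetic accepts 0x... literals
  printf '%d.%d.%d.%d\n' \
    $(( (n >> 24) & 255 )) $(( (n >> 16) & 255 )) \
    $(( (n >> 8) & 255 ))  $(( n & 255 ))
}

hex_to_dotted 0xfffffffc   # 255.255.255.252
```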

also at that point :

bash-3.2$ ping -c 4 10.0.75.1
PING 10.0.75.1 (10.0.75.1): 56 data bytes
64 bytes from 10.0.75.1: icmp_seq=0 ttl=64 time=0.083 ms
64 bytes from 10.0.75.1: icmp_seq=1 ttl=64 time=0.062 ms
64 bytes from 10.0.75.1: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 10.0.75.1: icmp_seq=3 ttl=64 time=0.046 ms

--- 10.0.75.1 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.046/0.059/0.083/0.015 ms
  • another interesting point:
    • before this new configuration, the IP addresses of the cluster nodes created with k3d were 192.168.0.*
    • after the Docker daemon is restarted and the tap interface is brought up, my k3d cluster has disappeared,
    • and when I recreated my k3d cluster, the node IP addresses are:
bash-3.2$ kubectl get nodes -o wide
NAME                      STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION      CONTAINER-RUNTIME
k3d-jblcluster-agent-0    Ready    <none>                      78s    v1.20.5+k3s1   172.18.0.5    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-agent-1    Ready    <none>                      69s    v1.20.5+k3s1   172.18.0.6    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-agent-2    Ready    <none>                      59s    v1.20.5+k3s1   172.18.0.7    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-0   Ready    control-plane,etcd,master   113s   v1.20.5+k3s1   172.18.0.2    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-1   Ready    control-plane,etcd,master   100s   v1.20.5+k3s1   172.18.0.3    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-2   Ready    control-plane,etcd,master   84s    v1.20.5+k3s1   172.18.0.4    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1

OK, now I add the IP route to reach the container IP addresses:

# route add -net 172.18.0.0/16 -netmask <IP MASK> 10.0.75.2
route add -net 172.18.0.0/16  10.0.75.2
  • and suddenly, yes, it works:
bash-3.2$ kubectl get nodes -o wide
NAME                      STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION      CONTAINER-RUNTIME
k3d-jblcluster-agent-0    Ready    <none>                      78s    v1.20.5+k3s1   172.18.0.5    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-agent-1    Ready    <none>                      69s    v1.20.5+k3s1   172.18.0.6    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-agent-2    Ready    <none>                      59s    v1.20.5+k3s1   172.18.0.7    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-0   Ready    control-plane,etcd,master   113s   v1.20.5+k3s1   172.18.0.2    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-1   Ready    control-plane,etcd,master   100s   v1.20.5+k3s1   172.18.0.3    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
k3d-jblcluster-server-2   Ready    control-plane,etcd,master   84s    v1.20.5+k3s1   172.18.0.4    <none>        Unknown    4.19.121-linuxkit   containerd://1.4.4-k3s1
bash-3.2$ route add -net 172.18.0.0/16  10.0.75.2
route: must be root to alter routing table
bash-3.2$ sudo route add -net 172.18.0.0/16  10.0.75.2
Password:
add net 172.18.0.0: gateway 10.0.75.2
bash-3.2$ ping -c 4 172.18.0.6
PING 172.18.0.6 (172.18.0.6): 56 data bytes
64 bytes from 172.18.0.6: icmp_seq=0 ttl=63 time=0.356 ms
64 bytes from 172.18.0.6: icmp_seq=1 ttl=63 time=0.183 ms
64 bytes from 172.18.0.6: icmp_seq=2 ttl=63 time=0.192 ms
64 bytes from 172.18.0.6: icmp_seq=3 ttl=63 time=0.166 ms

--- 172.18.0.6 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.166/0.224/0.356/0.077 ms
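A note on the route above: it lives only in the in-memory routing table, so it disappears on reboot and must be re-added (and Docker recreating the bridge network can also invalidate it). To inspect and, later, remove it (macOS `netstat`/`route` syntax):

```shell
# Confirm the route points at the tap gateway (expect gateway 10.0.75.2).
netstat -rn | grep '172\.18' || true

# Remove the route when finished (requires root):
# sudo route delete -net 172.18.0.0/16
```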

@Jean-Baptiste-Lasselle
Copy link
Author

Jean-Baptiste-Lasselle commented Apr 18, 2021

OK, I now continue and try to provision MetalLB:

  • I prepare the ConfigMap for MetalLB:
cat <<EOF > ./metallb-config.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # - 192.168.1.240-192.168.1.250
      - 172.18.1.5-172.18.1.250
EOF
  • then the MetalLB documentation says a kube-proxy configuration must be set. But there is no kube-proxy in the k3d-created cluster. Will that be a problem? I'll proceed with the next installation steps, ignoring the kube-proxy config part, and we'll see what happens
  • execute :
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f ./metallb-config.yaml

kubectl get all -n metallb-system
  • Before that, the Traefik LoadBalancer has several External IPs, all outside the range of IPs managed by MetalLB:
NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP                                                         PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>                                                              443/TCP                      65m
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>                                                              53/UDP,53/TCP,9153/TCP       65m
kube-system   service/metrics-server       ClusterIP      10.43.96.147    <none>                                                              443/TCP                      65m
kube-system   service/traefik              LoadBalancer   10.43.13.247    172.18.0.2,172.18.0.3,172.18.0.4,172.18.0.5,172.18.0.6,172.18.0.7   80:32315/TCP,443:32067/TCP   65m
  • the External IPs that Traefik has are exactly the cluster node IPs
  • after MetalLB is provisioned and configured, Traefik has a new External IP in the range managed by MetalLB, 172.18.1.5
  • nevertheless, I cannot ping 172.18.1.5 from the host, but I can ping
  • OK, so to understand what happens here, I deploy an nginx and expose it with type LoadBalancer, to see which IP addresses it gets:
kubectl create deployment nginx --image=nginx
kubectl expose deploy nginx --port 8087 --type LoadBalancer
  • surprisingly, again, my nginx gets the IP addresses of the cluster nodes (so something is acting in place of MetalLB):
bash-3.2$ kubectl get all
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-clftp   1/1     Running             0          10m
pod/svclb-nginx-4v9dn        1/1     Running             0          32s
pod/svclb-nginx-cmlfl        1/1     Running             0          32s
pod/svclb-nginx-gfmdk        1/1     Running             0          32s
pod/svclb-nginx-ks2hn        1/1     Running             0          32s
pod/svclb-nginx-ph4rh        0/1     ContainerCreating   0          32s
pod/svclb-nginx-rk6pk        1/1     Running             0          32s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                                              PORT(S)          AGE
service/kubernetes   ClusterIP      10.43.0.1       <none>                                                   443/TCP          81m
service/nginx        LoadBalancer   10.43.185.243   172.18.0.3,172.18.0.4,172.18.0.5,172.18.0.6,172.18.0.7   8087:31241/TCP   33s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-nginx   6         6         1       6            1           <none>          33s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           10m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-6799fc88d8   1         1         1       10m
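The `svclb-nginx-*` pods in the output above are the giveaway: k3s ships its own LoadBalancer controller (klipper-lb, the "servicelb" component), and it claims LoadBalancer Services before MetalLB can. A sketch of recreating the cluster with servicelb disabled so MetalLB is the only LoadBalancer implementation; the flag name is from k3d v4 (v5 renamed it to `--k3s-arg`), and `--disable=servicelb` is the k3s server option:

```shell
# Sketch only: needs k3d and a running Docker daemon.
command -v k3d >/dev/null 2>&1 && docker info >/dev/null 2>&1 || exit 0

k3d cluster delete jblCluster || true

# --k3s-server-arg passes the flag through to every k3s server;
# '--disable=servicelb' turns off klipper-lb (Traefik can stay).
k3d cluster create jblCluster --agents 3 --servers 3 --network jbl_network \
  --api-port 0.0.0.0:7888 \
  --k3s-server-arg '--disable=servicelb'
```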

@Jean-Baptiste-Lasselle

OK, it seems the networking setup for Docker on macOS is pretty unstable; I'll definitely have to switch to solid Debian.
