Multus is no longer starting (infinite loop) #4467
Comments
Hi @ohault, what environment are you using? Is this perhaps on LXD? One thing you could try as a workaround is to edit the multus daemonset and remove the …
I'm using WSL2.
OK, thank you. Could you check whether the proposed workaround above helps?
Not so easy to test, as multus.yaml is read-only in the /snap tree because I used snap to install microk8s.
OK, the addons themselves are read-write, and can be found under …
It's exactly what I tried, but I got an error message like "The filesystem is read only".
I think there must be a mix-up here. The read-only parts of microk8s will be in … Can you try editing the file again? The command should be …
Or can you share the exact error message that you are getting?
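The daemonset edit suggested above is elided in the thread, but the kubelet events later in this report all complain that "/" is not a shared mount, which is the symptom of a volumeMount requesting mountPropagation: Bidirectional on a host where that cannot be satisfied. As a hedged sketch (the exact field and the JSON-patch indices are assumptions, not confirmed by the maintainer), removing that field from the daemonset might look like:

```shell
# Sketch (assumption): drop a mountPropagation field from the multus daemonset.
# The JSON-patch path (initContainer/volumeMount indices) is a guess -- verify
# it against the live manifest first with:
#   microk8s kubectl -n kube-system get ds kube-multus-ds -o yaml
PATCH='[{"op":"remove","path":"/spec/template/spec/initContainers/0/volumeMounts/0/mountPropagation"}]'
# Only attempt the patch when microk8s is actually installed on this machine.
if command -v microk8s >/dev/null 2>&1; then
  microk8s kubectl -n kube-system patch daemonset kube-multus-ds --type=json -p "$PATCH"
fi
```

Note that patching the live daemonset sidesteps the read-only /snap tree entirely, since the change goes through the API server rather than the manifest file.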
I have just tried from a fresh new system. |
Summary
Using the 1.28/stable channel, the multus daemonset cannot start (the enable script loops forever waiting for it), whereas it starts successfully on 1.24/stable.
Reproduction Steps
On 1.28/stable, enabling multus never completes:
sudo snap install microk8s --classic
microk8s enable multus
=>
Waiting for multus daemonset to start................................
On 1.24/stable, the same steps succeed:
sudo snap install microk8s --classic --channel=1.24/stable
microk8s enable multus
=>
Waiting for multus daemonset to start................................
Multus is enabled
Multus is enabled with version:
multus-cni version:v3.4.2, commit:4eac660359f223d34bcaf0fddbc42fd542f02ba1, date:2020-05-15T12:43:46+0000
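The two channels deploy different multus images (the describe output below shows v3.9 on 1.28/stable, while 1.24/stable reports v3.4.2). A quick way to confirm which image the addon deployed is a jsonpath query against the daemonset; this is a sketch, guarded so it is a no-op on machines without microk8s:

```shell
# Sketch: print the image used by the first container of the multus daemonset.
# The daemonset name kube-multus-ds is taken from the describe output below.
JSONPATH='{.spec.template.spec.containers[0].image}'
if command -v microk8s >/dev/null 2>&1; then
  microk8s kubectl -n kube-system get daemonset kube-multus-ds -o jsonpath="$JSONPATH"
fi
```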
Introspection Report
microk8s kubectl describe pod -n kube-system
=>
Name: kube-multus-ds-9mnfc
Namespace: kube-system
Priority: 0
Service Account: multus
Node: pcwin11oha/172.27.218.10
Start Time: Sun, 24 Mar 2024 12:05:10 +0100
Labels: app=multus
controller-revision-hash=69c976674c
name=multus
pod-template-generation=1
tier=node
Annotations:
Status: Pending
IP: 172.27.218.10
IPs:
IP: 172.27.218.10
Controlled By: DaemonSet/kube-multus-ds
Init Containers:
install-multus-binary:
Container ID:
Image: ghcr.io/k8snetworkplumbingwg/multus-cni:v3.9
Image ID:
Port:
Host Port:
Command:
cp
/usr/src/multus-cni/bin/multus
/host/opt/cni/bin/multus
State: Waiting
Reason: CreateContainerError
Ready: False
Restart Count: 0
Requests:
cpu: 10m
memory: 15Mi
Environment:
Mounts:
/host/opt/cni/bin from cnibin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h56gq (ro)
Containers:
kube-multus:
Container ID:
Image: ghcr.io/k8snetworkplumbingwg/multus-cni:v3.9
Image ID:
Port:
Host Port:
Command:
/entrypoint.sh
Args:
--multus-conf-file=auto
--multus-kubeconfig-file-host=/var/snap/microk8s/current/args/cni-network/multus.d/multus.kubeconfig
--cni-version=0.3.1
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
Mounts:
/host/etc/cni/net.d from cni (rw)
/host/opt/cni/bin from cnibin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h56gq (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
cni:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/args/cni-network/
HostPathType:
cnibin:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/opt/cni/bin/
HostPathType:
kube-api-access-h56gq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: :NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
Normal Scheduled 5m7s default-scheduler Successfully assigned kube-system/kube-multus-ds-9mnfc to pcwin11oha
Normal Pulling 5m7s kubelet Pulling image "ghcr.io/k8snetworkplumbingwg/multus-cni:v3.9"
Normal Pulled 4m34s kubelet Successfully pulled image "ghcr.io/k8snetworkplumbingwg/multus-cni:v3.9" in 32.058s (32.058s including waiting)
Warning Failed 4m34s kubelet Error: failed to generate container "50566bcd392f86d00a35d4aded6520f1e32dc0f373ed68ec90aed66cf175a8e8" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 4m34s kubelet Error: failed to generate container "93dfaf569e709eb49cdb7219cbab9fa21043e8022aeb2648f738939c0220247a" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 4m21s kubelet Error: failed to generate container "c4c5f7b86be8da6eaf7f881a3de956b5f03c61984d28ae99feb4d09b8be73dab" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 4m8s kubelet Error: failed to generate container "0b1ecb344511f725ce7a0ad7fc8dad952d13aa63da1c852dcd96e2f1febc9984" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 3m53s kubelet Error: failed to generate container "717c88b449c5ff933b7a53f5f460339ddb382abf7d6170cdcd2d53a73ba927f1" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 3m41s kubelet Error: failed to generate container "8fc8635ed7ea0967fdab20745e0811ee8125a7b082037188cf4af56912dc6683" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 3m29s kubelet Error: failed to generate container "6541ec4495b46a00116b3a7af15fb554a2e0bb1dd478d64b10c87daf041c1418" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 3m14s kubelet Error: failed to generate container "0252bd0225b048d393b671fd46439c28a3b315ab5c8dc71702983f6944036b75" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 3m3s kubelet Error: failed to generate container "599cb58f63f08c8065aca17f902d908b20c1a846951b91177e95429894062b6b" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Warning Failed 2m23s (x3 over 2m49s) kubelet (combined from similar events): Error: failed to generate container "5cc95b135d7b09cbf3cedd6eb9f061a1aeb827cadc4706914dc65301f048b587" spec: failed to generate spec: path "/var/snap/microk8s/current/opt/cni/bin/" is mounted on "/" but it is not a shared mount
Normal Pulled 6s (x22 over 4m34s) kubelet Container image "ghcr.io/k8snetworkplumbingwg/multus-cni:v3.9" already present on machine
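Every Warning event above fails with the same message: "/" is mounted on "/" but it is not a shared mount, which breaks mount propagation for the pod's hostPath volumes. Under WSL2 the root mount is often private by default. A hedged sketch for inspecting the current propagation, with a possible remedy (an assumption, not confirmed in this thread) left commented out:

```shell
# Sketch: check the propagation flag of the root mount. On hosts where it is
# "private", Bidirectional mountPropagation in pod specs cannot work.
ROOT_PROP=$(findmnt -no PROPAGATION / 2>/dev/null || echo unknown)
echo "root mount propagation: $ROOT_PROP"
# A possible remedy (assumption -- not confirmed in this thread) is to re-mark
# the root mount and its submounts as shared, then restart microk8s:
#   sudo mount --make-rshared /
```

Note that `mount --make-rshared` does not persist across reboots; on WSL2 it would need to be reapplied (or scripted) after each restart of the distribution.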