(Yep, I did read the template; but for some odd reason I am not seeing the signup verification email. I am pretty sure it's a layer 8 problem... so, apologies in advance!)
Hello! I am trying to bootstrap the NFS-CSI driver off the Helm chart in a k3s cluster - only one node for now; I intend to grow it to a few more once I have my base config figured out. But this means that this message:
```
kube-system 0s Warning FailedScheduling Pod/csi-nfs-controller-59b87c6c7c-ktfh7 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
```
isn't helping a whole lot. I have tried to get rid of it, but no matter what I set controller.tolerations to, I keep getting that warning.
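In case it helps, this is roughly how I have been checking what the chart actually renders onto the controller pod (assuming the controller pods carry the app=csi-nfs-controller label; adjust if the chart labels them differently):

```sh
# Dump the scheduling-related fields of the rendered controller pod(s)
# (label selector is an assumption, not copied from my cluster)
kubectl -n kube-system get pod -l app=csi-nfs-controller \
  -o jsonpath='{range .items[*]}{.spec.nodeSelector}{"\n"}{.spec.affinity}{"\n"}{.spec.tolerations}{"\n"}{end}'
```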
First, here's my HelmChart and values as kubectl apply'd to the k3s node:
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nfs-csi-chart
  namespace: kube-system
spec:
  repo: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
  chart: csi-driver-nfs
  #version: latest
  targetNamespace: kube-system
  valuesContent: |-
    serviceAccount:
      create: true  # When true, service accounts will be created for you. Set to false if you want to use your own.
      # controller: csi-nfs-controller-sa # Name of Service Account to be created or used
      # node: csi-nfs-node-sa # Name of Service Account to be created or used
    rbac:
      create: true
      name: nfs
    driver:
      name: nfs.csi.k8s.io
      mountPermissions: 0
    feature:
      enableFSGroupPolicy: true
      enableInlineVolume: false
      propagateHostMountOptions: false
    # do I have to change that?; k3s on /mnt/usb/k3s but no kubelet dir
    kubeletDir: /var/lib/kubelet
    controller:
      # TODO: do i need to true them?
      runOnControlPlane: true
      runOnMaster: true
      logLevel: 5
      workingMountDir: /tmp
      defaultOnDeletePolicy: retain  # available values: delete, retain
      priorityClassName: system-cluster-critical  # FIXME: better solution???
      tolerations: []
    node:
      name: csi-nfs-node  # TODO: sync to backup
    externalSnapshotter:
      enabled: false
      name: snapshot-controller
      priorityClassName: system-cluster-critical
      # Create volume snapshot CRDs.
      customResourceDefinitions:
        enabled: true  # if set true, VolumeSnapshot, VolumeSnapshotContent and VolumeSnapshotClass CRDs will be created. Set it false, If they already exist in cluster.
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-bunker
provisioner: nfs.csi.k8s.io
parameters:
  # alt. use tailscale IP
  server: 192.168.1.2
  share: /mnt/vol1/Services/k3s
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```
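For illustration, the kind of tolerations block I have been experimenting with looks roughly like this (a sketch using the standard control-plane/master taint keys; the actual taint keys on my node may differ):

```yaml
controller:
  tolerations:
    # Sketch: tolerate the usual control-plane taints (keys assumed, not copied from my node)
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
```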
When I look at the generated pod that throws the error, I can see the tolerations right then and there:
Is there something I overlooked to make the controller properly schedule onto my node? Looking at the node itself shows the related taints:
Node spec
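For reference, the role labels and taints on the node can be pulled with plain kubectl (no node name needed since there is only one):

```sh
# Show the node's labels (including node-role.kubernetes.io/*) and its taints
kubectl get node --show-labels
kubectl get node -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'
```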
Do you perhaps see something that I missed?
Thank you and kind regards,
Ingwie