How can I make the kube scheduler aware of the underlying storage constraints in longhorn? #7717
Unanswered
MisguidedEmails asked this question in Q&A
Replies: 1 comment
This definitely feels like a hole in Longhorn, or potentially a bug. I have 3 of 6 nodes disabled (to try to coerce a 3-replica StatefulSet onto specific nodes), but Longhorn is happy to satisfy a PVC by creating PVs on those disabled nodes anyway. Really, the only option might be to pre-create PVs when I know this is going to happen, though I shouldn't have to manhandle and babysit the storage system like this.
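For reference, pre-creating a PV against an existing Longhorn volume looks roughly like the following. This is a sketch: the name `prestaged-vol` and the size are placeholders, and the Longhorn volume must already have been created (e.g. via the Longhorn UI) on the desired node before the PV is applied.

```yaml
# Statically provisioned PV pointing at a Longhorn volume that was
# created ahead of time on a chosen node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prestaged-vol              # placeholder name
spec:
  capacity:
    storage: 10Gi                  # must match the Longhorn volume's size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn-static
  csi:
    driver: driver.longhorn.io
    volumeHandle: prestaged-vol    # name of the existing Longhorn volume
    fsType: ext4
```

A PVC then binds to this PV by requesting `storageClassName: longhorn-static`, which sidesteps dynamic provisioning entirely.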
-
The kube scheduler isn't aware of what's happening in Longhorn, so it will happily schedule pods wherever it wants based on CPU/memory. This leads to problems: a pod can land on a node whose Longhorn storage is already heavily allocated.
The only manual workarounds I'm aware of involve explicitly constraining volumes and pods to certain nodes (e.g. pod anti-affinity, a volume nodeSelector).
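As a sketch of the node-selector workaround: Longhorn's StorageClass accepts a `nodeSelector` parameter that matches Longhorn node tags, so replicas can be pinned to tagged nodes. The class name `longhorn-tagged` and the tag `storage` below are placeholders.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-tagged            # placeholder name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  nodeSelector: "storage"          # replicas only on Longhorn nodes tagged "storage"
```

Note this only constrains where the replicas live; the pods themselves still need their own nodeAffinity/anti-affinity rules to follow the data.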
So my question is - is there any way to help schedule pods and volumes on nodes that have less storage allocated/used?
I know of the Storage Capacity Tracking CSI feature, but that only cares about unbound PVCs, and besides, Longhorn doesn't support that capability anyway. It does seem possible to make the scheduler aware of this information (as topolvm does).
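For context, drivers that implement capacity tracking (like topolvm) publish per-node `CSIStorageCapacity` objects, which the scheduler consults when binding PVCs under `volumeBindingMode: WaitForFirstConsumer`. A sketch of such an object follows; the name, namespace, storage class, and topology label values are illustrative.

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity           # normally generated by the CSI driver
  namespace: kube-system
storageClassName: topolvm-provisioner
capacity: 50Gi                     # free space the driver reports for this segment
nodeTopology:
  matchLabels:
    topology.topolvm.io/node: worker-1   # illustrative node topology label
```

The scheduler filters out nodes whose reported capacity can't fit the pending claim, which is exactly the awareness being asked about here.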
Am I missing something? Or am I at the whim of the kube scheduler as to whether or not my nodes will run out of disk space?