All backups are running on a single node #8271
Replies: 9 comments
-
Hi @erichevers, would you mind describing in more detail the "backup" you are referring to? Thanks!
-
Hi @ChanYiLin,
-
Hi @erichevers, we do have settings to control where these jobs can run. You can find them in the Longhorn dashboard. Here is the doc: https://longhorn.io/docs/1.6.1/references/settings/#danger-zone
If all the jobs run on the same node, it may be because these two settings limit where they are allowed to run.
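For reference, those Danger Zone settings can also be read from the CLI instead of the dashboard. A minimal sketch, assuming kubectl access to the cluster and the default `longhorn-system` namespace (setting names per the linked Longhorn docs):

```shell
# Read the two Danger Zone settings that constrain where Longhorn
# system-managed pods (including backup jobs) are allowed to run.
# An empty value means no restriction.
kubectl -n longhorn-system get settings.longhorn.io taint-toleration -o jsonpath='{.value}'
kubectl -n longhorn-system get settings.longhorn.io system-managed-components-node-selector -o jsonpath='{.value}'
```

If either value is set, the jobs can only be scheduled onto nodes that satisfy it.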
-
Hi @ChanYiLin, Regards,
-
Can you use kubectl to check whether those pods have taints or affinities? Could you also provide more details about your cluster and environment? Thanks!
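One way to do that check from the CLI, sketched under the assumption of kubectl access (the node and pod names would be your own; the jsonpath commands for affinity/tolerations are shown as comments since they need a live cluster):

```shell
#!/bin/sh
# Filter the Taints line(s) out of `kubectl describe node` output.
# Usage against a live cluster:
#   kubectl describe node <node-name> | taints_of
# Related checks on a suspect backup pod:
#   kubectl -n longhorn-system get pod <pod-name> -o jsonpath='{.spec.affinity}'
#   kubectl -n longhorn-system get pod <pod-name> -o jsonpath='{.spec.tolerations}'
taints_of() {
  grep -i '^Taints:'
}
```

`Taints: <none>` on every node and empty affinity/tolerations on the pods would rule out scheduling constraints.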
-
Hi @ChanYiLin, `kubectl describe cronjob c-t0dgtz -n longhorn-system` gives the cronjob details, the pod that was running for this cronjob, and the job that was running. I don't see any taints or affinities in any of these.
-
How many nodes are in your cluster? If there are no taints or affinities, then the scheduling is decided by Kubernetes and should be roughly round-robin.
-
Hi @ChanYiLin,
-
In this case, the backup jobs are assigned to the same node. The assignment is determined randomly by the engine proxies in the instance-manager (IM) pods.
-
In our cluster, all the backups run on the same single node. We have over 60 PVCs, so that node is complaining that it has too many pods running. Is it normal that all backups use only one node, or is it possible to spread the load over multiple nodes?
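To measure how skewed the distribution actually is, the NODE column of `kubectl get pods -o wide` can be tallied. A minimal sketch, assuming kubectl access and the default column layout of wide output (NODE is the 7th column):

```shell
#!/bin/sh
# Count pods per node from `kubectl get pods -o wide --no-headers` output,
# to see how the Longhorn backup job pods are spread across nodes.
# Usage against a live cluster:
#   kubectl -n longhorn-system get pods -o wide --no-headers | pods_per_node
pods_per_node() {
  awk '{count[$7]++} END {for (n in count) print n, count[n]}'
}
```

If one node dominates the counts while the others are idle, that confirms the single-node scheduling described above.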