Does A Volume's Replica(s) Influence Pod Scheduling? #7820
Unanswered
jsievenpiper asked this question in Q&A
I've recently started moving workloads to replicated Longhorn volumes, and I'm trying to understand some behavior I couldn't quite grok from the documentation. I'm happy to open a PR to update the docs after sorting through this if it's helpful to others.

I have a standard Deployment (single replica, if it matters) referencing a replicated Longhorn volume, with the volume's replicas spread across multiple nodes. If I restart the Deployment, the pod moves between nodes as expected, but its placement doesn't seem to be influenced by the existing Longhorn replicas. This usually leads to Longhorn creating another replica on the pod's new node, delaying pod spin-up (and then destroying some other replica to bring the replica count back into check).
I have the volume's data locality set to best-effort, but these nodes have plenty of capacity. Is this expected? Everything is working otherwise; it just seems odd to thrash replicas that way. Admittedly, I know very little about how the scheduler can be influenced!
Thanks in advance!

Reply:

Longhorn will try to maintain a replica on the same node as the workload when the volume's data locality is configured as best-effort. See the data locality settings: https://longhorn.io/docs/1.5.3/high-availability/data-locality/#data-locality-settings
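
For reference, data locality is usually set per volume or through the StorageClass that provisions it. Here is a minimal sketch of a Longhorn StorageClass using best-effort data locality; the class name and replica count are illustrative, not taken from this thread:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-best-effort   # illustrative name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"        # illustrative; replicas kept per volume
  staleReplicaTimeout: "30"    # minutes before a failed replica is cleaned up
  # With best-effort, Longhorn tries to keep one replica on the same node
  # as the attached workload, rebuilding it locally after the pod moves.
  dataLocality: "best-effort"
```

Note that, as described in the data locality docs, best-effort moves a replica to follow the pod; it does not steer the Kubernetes scheduler toward nodes that already hold replicas, which matches the rebuild-then-prune behavior described in the question. Replica placement can be inspected in the Longhorn UI or, assuming the default install namespace, with `kubectl -n longhorn-system get replicas.longhorn.io`.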