I'm pretty new to Kubernetes and Longhorn. I recently started migrating my local home lab from Docker Swarm with GlusterFS to Kubernetes with Longhorn.
I'm starting to get familiar with everything and beginning to like how it all works. I have three nodes, each serving as master, worker, and Longhorn storage, all on different machines (two virtualized on different Proxmox VE hosts and one bare metal). Because the cluster runs some services (smart home, DNS, etc.) whose failure would strain the relationship with my wife, I went with three nodes to get automatic recovery.
Everything seems to work fine, but if I shut down k3s on one node, or shut the node down completely, Longhorn keeps the volumes attached to that node. It first tries to detach them (but never actually does) until the replacement pod kicks in; then the volume is stuck in Attaching, without recovering, until the "failing" node comes back up.
Does anyone have a hint on how I could get Longhorn to force-detach from a failing node?
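One manual workaround when a volume is stuck because Kubernetes still believes the old pod holds it is to remove the stale VolumeAttachment and force-delete the terminating pod. A rough sketch; the attachment, pod, and namespace names below are placeholders for your own cluster, and force-deleting should only be done when the node is genuinely down (otherwise two pods could write to the same volume):

```shell
# List VolumeAttachment objects and spot the one still bound to the failed node.
kubectl get volumeattachments -o wide

# Delete the stale attachment for the affected PV (name is a placeholder).
kubectl delete volumeattachment csi-0123456789abcdef

# Force-delete the pod stuck in Terminating on the down node
# (pod name and namespace are placeholders).
kubectl delete pod my-app-pod -n my-namespace --grace-period=0 --force
```

After this, the scheduler can recreate the pod elsewhere and Longhorn can attach the volume to the new node.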
Edit: even after shutting down all nodes and booting two of them back up, Longhorn still thinks the volume is mounted on the "failed" node and does not automatically detach it to free it for the recreation of the new pod.
Edit 2: I don't know why, but right after my last edit the volume got detached and the new pod was created.
I double-checked: started the node, moved one of the pods, and shut it down again. It takes a little while (approx. 3-5 minutes), but it now seems to work the way I expected, based on my knowledge of Swarm.
Is there some way to shorten the time it takes to forcefully detach a volume? Sub one minute would be great.
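The 3-5 minute delay likely comes from Kubernetes defaults rather than Longhorn itself: when a node stops reporting, its pods are only evicted after the default 300-second toleration for the `node.kubernetes.io/unreachable` and `node.kubernetes.io/not-ready` taints expires. One way to shorten this per workload is to set explicit `tolerationSeconds`; a sketch, where the deployment name and namespace are placeholders (note that a patch like this replaces any tolerations the workload already has):

```shell
# Lower the eviction delay for this workload from the 300 s default to 30 s.
# "my-app" and "my-namespace" are placeholders.
kubectl patch deployment my-app -n my-namespace -p '
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
'
```

Longhorn also ships a setting along the lines of "Pod Deletion Policy When Node is Down" (`node-down-pod-deletion-policy`), which can automatically delete StatefulSet/Deployment pods on a down node so their volumes can be detached sooner; check the settings reference for your Longhorn version for the exact name and values.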