Describe the bug
When using pot with Nomad, the special directories Nomad mounts into the pot are left behind after the pot stops.
To Reproduce
Run a basic nomad pot example (like nginx) and migrate it a couple of times (start/stop etc.).
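A rough repro loop is to run and stop the job repeatedly; a sketch, assuming the job file and job name from the nginx example (adjust to your setup):

```sh
# Start the example job, let it settle, then stop and purge it.
# Repeat a handful of cycles and the stale mounts start to accumulate.
nomad job run nginx.nomad
sleep 30
nomad job stop -purge nginx
```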
After a while you will see leftover mounts piling up, even though only one container is running.
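A quick way to spot the leftovers (a sketch, assuming the mount-ins are nullfs mounts and pot's default filesystem root under /opt/pot):

```sh
# List nullfs mounts pointing into pot jail directories; entries that
# belong to pots which are no longer running are the leftovers.
mount -t nullfs | grep /opt/pot/jails/
```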
Expected behavior
No leftover mounts
Additional context
My suspicion is that the umounts fail when the jail stops (perhaps because some processes are still using the mountpoints), and the ZFS filesystem is purged afterwards regardless, which leaves the mounts dangling. A normal manual umount of these mounts later works fine.
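If that suspicion is right, a retry-then-force fallback in the stop path would avoid the dangling entries; a minimal sketch (hypothetical helper, not pot's actual code):

```sh
#!/bin/sh
# Retry a clean umount while the jail's processes wind down, then
# force it, so the mount never outlives the ZFS dataset under it.
unmount_with_retry() {
    _mnt="$1"
    for _try in 1 2 3 4 5; do
        umount "$_mnt" 2>/dev/null && return 0
        sleep 1
    done
    # Forced unmount as a last resort, before the dataset is destroyed.
    umount -f "$_mnt"
}
```

Called once per mount-in before the dataset is destroyed, this would survive a process briefly holding the mountpoint open.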
So it looks like the pot is stopped and destroyed twice. The second stop call comes after 5s, which looks like a Nomad timeout. The fix for that might belong in Nomad, but it also feels like there is a lack of locking involved, since stop and destroy can be called multiple times in parallel.
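On the locking point, FreeBSD's lockf(1) is one cheap way to serialize those paths; a sketch assuming a per-pot lock file (pot does not currently do this):

```sh
#!/bin/sh
# Serialize stop/destroy per pot so parallel invocations from Nomad
# queue up instead of racing. The lock path is hypothetical.
pot_name="$1"
action="$2"   # "stop" or "destroy"
lock="/var/run/pot-${pot_name}.lock"
# -k keeps the lock file around; -t 30 gives up after 30 seconds
# instead of hanging forever behind a stuck stop.
exec lockf -k -t 30 "$lock" pot "$action" "$pot_name"
```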