
volsupervisor does not free locks if it is terminated during snapshot operations #427

Open
erikh opened this issue Aug 17, 2016 · 21 comments

Comments

erikh commented Aug 17, 2016

This is the issue behind the VolsupervisorRestart test. I'm fixing this today, but the repro is basically:

  • Create a volume (and mount it or create it in unlocked mode so it gets snapped)
  • Wait a minute until a snapshot is in progress (watch the users keyspace for the snapshot lock)
  • Restart volsupervisor
  • Snapshots can no longer be taken because of the stale lock.

Steps to fix:

  • Add a lock for running volsupervisor itself. This lock should live in the users keyspace (as a new UseLocker) and should be refreshed by volsupervisor through the lock library via acquire with TTL refresh.
  • Convert the ExecuteWithUseLock calls to AcquireWithTTLRefresh (rough sketch of the idea below).
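
For illustration, a rough sketch of the TTL-refresh idea against the raw etcd v2 keys API is below. The key path, value, and timings are made up, and none of this is volplugin's actual lock library; the real change would go through AcquireWithTTLRefresh.

package main

import (
    "log"
    "time"

    "github.com/coreos/etcd/client"
    "golang.org/x/net/context"
)

func main() {
    c, err := client.New(client.Config{Endpoints: []string{"http://127.0.0.1:2379"}})
    if err != nil {
        log.Fatal(err)
    }
    kapi := client.NewKeysAPI(c)

    const key = "/volplugin/users/volsupervisor" // hypothetical key path
    const ttl = 30 * time.Second

    // Acquire: fails if another volsupervisor already holds the key.
    if _, err := kapi.Set(context.Background(), key, "running", &client.SetOptions{
        TTL:       ttl,
        PrevExist: client.PrevNoExist,
    }); err != nil {
        log.Fatalf("another volsupervisor appears to be running: %v", err)
    }

    // Refresh: keep re-setting the TTL while this process is alive. If the
    // process dies, the key expires on its own after ttl and the lock is
    // freed without any explicit cleanup step.
    go func() {
        for range time.Tick(ttl / 3) {
            if _, err := kapi.Set(context.Background(), key, "running", &client.SetOptions{
                TTL:       ttl,
                PrevExist: client.PrevExist,
            }); err != nil {
                log.Fatalf("lost the volsupervisor lock: %v", err)
            }
        }
    }()

    select {} // placeholder for the real daemon loop
}

If the process dies without cleaning up, the key simply expires, which is the property we want for both the volsupervisor lock and the per-snapshot use locks.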
vvb commented Aug 17, 2016

How about something in between:

  1. volsupervisor is taking a snapshot and crashes while still holding the lock on volume-X.
  2. A new volsupervisor (with a new PID) comes up.
  3. When the next snapshot is due on volume-X, the new volsupervisor finds a pre-existing lock on the volume.
  4. It also sees that the snapshotting_in_process_timer has not been punched for X iterations. (In general, snapshotting_in_process_timer should be punched, say, every X secs while a lock is held.)
  5. It resets the lock.
  6. Life is good again!

This should safeguard us against quick restarts and also ensure that we don't get stuck with stale locks.
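
For what it's worth, the "punching" half of this could look something like the sketch below. The key layout, names, and interval are hypothetical, not existing volplugin code.

package sketch

import (
    "time"

    "github.com/coreos/etcd/client"
    "golang.org/x/net/context"
)

const punchInterval = 10 * time.Second // "every X secs"

// punchWhileSnapshotting updates a per-volume heartbeat key until the
// snapshot finishes (signalled via done).
func punchWhileSnapshotting(kapi client.KeysAPI, volume string, done <-chan struct{}) {
    for {
        select {
        case <-done:
            return
        case <-time.After(punchInterval):
            // Hypothetical key: /volplugin/heartbeat/<volume>
            kapi.Set(context.Background(), "/volplugin/heartbeat/"+volume,
                time.Now().Format(time.RFC3339), nil)
        }
    }
}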

erikh commented Aug 17, 2016

that's a TTL

erikh commented Aug 17, 2016

the problem with assuming pre-existing locks from a new process is that it's terribly hard to determine whether or not that lock is stale.

vvb commented Aug 17, 2016

> the problem with assuming pre-existing locks from a new process is that it's terribly hard to determine whether or not that lock is stale.

I think not, if we define what we consider stale. Stale may not mean that the older volsup process is dead; it might just be busy. But let's say stale == volsup not punching the timer for 3 iterations. The difference between this and a TTL is that the etcd lock on the volume is not removed after 3 iterations; if the busy volsup recovers, it still gets time to reclaim its lock.

Unless a new volsup comes in and wants to operate on the same volume during the window when the older process is busy and has run past the predefined time after which we consider a lock stale. In that scenario, the new process can clear the lock and continue. When the older volsup recovers, it should recognise that it has lost the lock.
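
Continuing the hypothetical sketch from above (again, not volplugin's actual lock code), the staleness check might look like this: the lock is only cleared after the heartbeat has been missed for three intervals, so a busy-but-alive volsup keeps its claim.

package sketch

import (
    "time"

    "github.com/coreos/etcd/client"
    "golang.org/x/net/context"
)

const punchInterval = 10 * time.Second // same value as in the punching sketch

// lockIsStale reports whether volume-X's heartbeat has been missed for three
// punch intervals, which is the definition of "stale" proposed above.
func lockIsStale(kapi client.KeysAPI, volume string) bool {
    resp, err := kapi.Get(context.Background(), "/volplugin/heartbeat/"+volume, nil)
    if err != nil {
        return true // no heartbeat at all: treat the lock as stale
    }
    last, err := time.Parse(time.RFC3339, resp.Node.Value)
    if err != nil {
        return true
    }
    return time.Since(last) > 3*punchInterval
}

// maybeReclaim clears the volume's use lock only when it is stale, so a
// recovering volsup that resumes punching in time keeps its claim.
func maybeReclaim(kapi client.KeysAPI, volume string) {
    if lockIsStale(kapi, volume) {
        kapi.Delete(context.Background(), "/volplugin/users/"+volume, nil)
    }
}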

erikh commented Aug 17, 2016

Yes, the TTL refresh accomplishes “punching the timer”. Look at the
lock/lock.go code… AcquireTTLRefresh I think.

What will happen is that if that code is no longer running, the lock
will expire after the TTL does.

We should never allow two volsupervisors to run. I guess part of the fix
here is making sure that never happens.

But your paragraphs here are basically describing what the
aforementioned call does. :D

yuva29 commented Aug 17, 2016

Is it possible to free all the snap locks when volsupervisor gets terminated during a restart? All of those locks indicate "snap in progress", right? Let me know if I'm missing something here.

When only one volsupervisor is running, wouldn't the first option work well?

vvb commented Aug 18, 2016

@erikh given the distributed architecture and the ability to run volsup on any of the nodes, it is hard to ensure that no more than one volsup comes up, even momentarily. Maybe, if we could create some form of node-level constraint for the volsup process and restrict it to a node (which should be runtime user-configurable), then we could check things at the process level: pgrep volsupervisor &>/dev/null || ./volsupervisor

erikh commented Aug 18, 2016

@yuva29 If it doesn't terminate cleanly, this doesn't work.

erikh commented Aug 18, 2016

@vvb right, I had previously made that the end user's job, but perhaps it is time to handle this ourselves. Using TTLs plus a lock for running volsupervisor itself would probably be best.

dseevr commented Aug 18, 2016

What actually happens in the case where a snapshot is triggered while a snapshot is still running? I/O just gets punished until one/both complete?

In either proposed solution, isn't the worst case scenario that one snapshot can overlap another snapshot in progress (per volume, per volsupervisor restart/network partition)?

solution 1: new volsupervisor starts, grabs a lock, issues a snapshot (now two are running), no other snapshots can be queued because of the lock

solution 2: network partition, lock expires while snapshot is still running, volsupervisor grabs a new lock, issues a snapshot (now two are running), no other snapshots can be queued because of the lock

As mentioned, both of these cases can be triggered repeatedly in a flapping/bad network scenario which could lead to multiple snapshots piling up on top of each other.

What if we make volsupervisor aware of its past instances? (This assumes we make it so only one volsupervisor can be running cluster-wide)

e.g., if volsupervisor grabs a lock so it can start, it will then check a key to see when the last volsupervisor instance was started. If the last start time was too recent, it will wait a configurable amount of time before it attempts to clear locks and allow normal snapshotting operations to resume.

Not a solution, but it could mitigate the rapid restart/overlapping snapshots problem.
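
A sketch of that start-time check is below; the key name, timestamp format, and timings are made up for illustration and are not existing volplugin code.

package sketch

import (
    "time"

    "github.com/coreos/etcd/client"
    "golang.org/x/net/context"
)

const lastStartKey = "/volplugin/volsupervisor/last-start" // hypothetical key

// waitOutRapidRestart delays lock cleanup if the previous volsupervisor
// instance started too recently, then records our own start time.
func waitOutRapidRestart(kapi client.KeysAPI, minGap time.Duration) {
    if resp, err := kapi.Get(context.Background(), lastStartKey, nil); err == nil {
        if last, err := time.Parse(time.RFC3339, resp.Node.Value); err == nil {
            if gap := time.Since(last); gap < minGap {
                // Previous instance started very recently: hold off before
                // clearing any snapshot locks.
                time.Sleep(minGap - gap)
            }
        }
    }
    kapi.Set(context.Background(), lastStartKey, time.Now().Format(time.RFC3339), nil)
}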

Another idea: could we actually query the volume/daemon itself and check if a snapshot is running on each volume we know about? If none are running, clear all the locks and resume normal operations. If some are running, wait until they're all finished before resuming normal operations.

Sorry for the long comment.

erikh commented Aug 18, 2016

inline:

> What actually happens in the case where a snapshot is triggered while
> a snapshot is still running? I/O just gets punished until one/both
> complete?
>
> In either proposed solution, isn't the worst case scenario that one
> snapshot can overlap another snapshot in progress (per volume, per
> volsupervisor restart/network partition)?
>
> solution 1: new volsupervisor starts, grabs a lock, issues a snapshot
> (now two are running), no other snapshots can be queued because of the
> lock
>
> solution 2: network partition, lock expires while snapshot is still
> running, volsupervisor grabs a new lock, issues a snapshot (now two
> are running), no other snapshots can be queued because of the lock

Right, but if we have a situation where the volsupervisor is restarted,
we may be sending N snapshots instead of just 1.

> As mentioned, both of these cases can be triggered repeatedly in a
> flapping/bad network scenario which could lead to multiple snapshots
> piling up on top of each other.
>
> What if we make volsupervisor aware of its past instances? (This
> assumes we make it so only one volsupervisor can be running
> cluster-wide)
>
> e.g., if volsupervisor grabs a lock so it can start, it will then
> check a key to see when the last volsupervisor instance was started.
> If the last start time was too recent, it will wait a configurable
> amount of time before it attempts to clear locks and allow normal
> snapshotting operations to resume.
>
> Not a solution, but it could mitigate the rapid restart/overlapping
> snapshots problem.

Yes, that’s what @vvb and I were discussing. I think that’s a
requirement and will update the ticket header accordingly.

> Another idea: could we actually query the volume/daemon itself and
> check if a snapshot is running on each volume we know about? If none
> are running, clear all the locks and resume normal operations. If
> some are running, wait until they're all finished before resuming
> normal operations. Sorry for the long comment.

The use locks accomplish this. Each snapshot acquires a lock while it is
taking the snapshot. These locks can be queried. Doing this at the
storage level seems like a hard, hard, hard problem to do correctly for
all storage out there.

-Erik
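
For reference, a minimal sketch of querying those use locks by listing a keyspace prefix; the prefix here is assumed, since the real layout lives in volplugin's lock library.

package sketch

import (
    "fmt"
    "log"

    "github.com/coreos/etcd/client"
    "golang.org/x/net/context"
)

// listSnapshotLocks prints every outstanding use lock under an assumed prefix.
func listSnapshotLocks(kapi client.KeysAPI) {
    resp, err := kapi.Get(context.Background(), "/volplugin/users",
        &client.GetOptions{Recursive: true})
    if err != nil {
        log.Fatal(err)
    }
    for _, node := range resp.Node.Nodes {
        fmt.Println("snapshot in progress:", node.Key)
    }
}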

dseevr commented Aug 18, 2016

Right, I get that, but we can't trust that those use locks are still accurate in a restart scenario. Is there no easy way to do something like this:

for _, volume := range volumesWithSnapshotsInProgressAtStartup() {
    go pollUntilSnapshotIsFinishedAndDeleteUseLock(volume)
}

and have it query the actual ceph daemon to see its status (snapshotting or otherwise)?

erikh commented Aug 18, 2016

if we use an expiry TTL it will automatically go away

erikh commented Aug 18, 2016

As for the Ceph questions: as I explained, this does not translate well to different storage architectures, where we may or may not know whether a snapshot can be taken. I don't think it is wise to solve this problem at that level.

dseevr commented Aug 18, 2016

Which is worse: a dangling lock that is deleted after a configurable amount of time (whether by the original volsupervisor that created it or any future one), potentially blocking normal snapshot operations for a while, or a lock that expires prematurely, allowing multiple snapshot operations to run at once?

erikh commented Aug 18, 2016

the former, imo. what do you think?

dseevr commented Aug 18, 2016

I would also prefer the former.

erikh commented Aug 18, 2016

yep. I've updated the ticket with the design requirements. Please review @yuva29 @dseevr @vvb

dseevr commented Aug 18, 2016

LGTM

erikh commented Aug 18, 2016

to be clear, the former would block new snap operations, not increase I/O

vvb commented Aug 18, 2016

LGTM
