Prevent existence of multiple active CAs on the same node #121
This PR is needed because multiple CAs running on the same node might interfere with each other when interacting with OvS, producing safety violations.
This PR guarantees that there cannot be multiple CAs doing real work on a node by having each CA attempt to acquire a lock file on the node at startup. "Doing real work" means anything the CA normally does.
Note that this is different from ensuring there are no other CAs on the same node: if a CA C2 attempts to acquire the lock while another CA C1 on the same node is holding it, C2 simply waits until C1 terminates and releases the lock. At that point C2 acquires the lock and proceeds to become the active CA on the node.
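As a rough illustration of the wait-on-lock-contention behavior, here is a minimal sketch of what the startup lock acquisition could look like, assuming a flock-based utility similar to the kubelet's; the lock path, function names, and package layout are hypothetical, not taken from this PR:

```go
// Minimal sketch: the CA takes an exclusive lock on a node-local file at
// startup and holds it for the lifetime of the process.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

// acquireNodeLock opens (or creates) the lock file and takes an exclusive
// flock on it. The fd is intentionally never closed: the lock is held until
// the process exits, at which point the kernel releases it automatically.
func acquireNodeLock(path string) error {
	fd, err := unix.Open(path, unix.O_CREAT|unix.O_RDWR|unix.O_CLOEXEC, 0o600)
	if err != nil {
		return err
	}
	// LOCK_EX without LOCK_NB: the call blocks until the lock becomes
	// available, i.e. until the CA currently holding it terminates.
	return unix.Flock(fd, unix.LOCK_EX)
}

func main() {
	// "/run/ca.lock" is an illustrative path, not the one used by this PR.
	if err := acquireNodeLock("/run/ca.lock"); err != nil {
		log.Fatalf("failed to acquire node lock: %v", err)
	}
	// ... proceed as the single active CA on this node ...
}
```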
I would prefer a different behavior (fail-on-lock-contention): if C2 attempts to acquire the lock while it is already held by C1, C2 terminates immediately (possibly making some noise, such as a K8s event, since this is a situation we're not expecting).
The wait-on-lock-contention behavior was chosen because it is the behavior of the file-locking utility already present in the K8s utils, which we can therefore reuse.
But note that if we do want to switch to fail-on-lock-contention, that's trivial to implement (the resulting function body would be 5 lines of code).
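For comparison, a sketch of what the fail-on-lock-contention variant could look like under the same assumptions (names and lock path remain illustrative): adding LOCK_NB makes the call fail immediately instead of waiting.

```go
// Hypothetical fail-on-lock-contention variant; not what this PR implements.
package nodelock

import "golang.org/x/sys/unix"

// tryAcquireNodeLock fails immediately if another process already holds the
// lock, instead of waiting for it to be released.
func tryAcquireNodeLock(path string) error {
	fd, err := unix.Open(path, unix.O_CREAT|unix.O_RDWR|unix.O_CLOEXEC, 0o600)
	if err != nil {
		return err
	}
	// LOCK_NB: return unix.EWOULDBLOCK right away if the lock is held, so the
	// caller can terminate (and, e.g., emit a K8s event) instead of waiting.
	return unix.Flock(fd, unix.LOCK_EX|unix.LOCK_NB)
}
```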
I also plan to ask the SIG Node people whether they'd be interested in adding a fail-on-lock-contention option, and why the wait-on-lock-contention behavior was chosen in the first place (I suspect it's to allow some form of coordination between an old kubelet and a new, upgraded kubelet instance on the same node).
But what's here is already enough to avoid safety violations.