Elektra's locking strategy while recording #4934
Regarding the semlock plugin, which was removed in 61de4dc because it didn't work on NFS, @kodebach wrote:
Yes, I agree. Ideally it wouldn't introduce a new source by simply writing the change-tracking journal before the kdbSet commit happens (and conflicts while writing the change-tracking journal are the same as writing to other config files). As this would probably be quite a big change (and maybe even does not work if you need to write something during kdbGet?), having proper locking and properly documenting what that means sounds like the second best option. Is it at least guaranteed that you don't get conflicts while recording? Having both conflicts and locks sounds like the worst of both worlds.
We can do that, but that would mean a bit of refactoring and providing a mechanism for a "distributed transaction" between two KDB instances. (Easiest way is probably to allow a way to manually call

On the other hand, if we don't care (or if it's incredibly unlikely) that the
It depends on what you mean by "conflict". We abort immediately if we can't get the lock, so it behaves the same way as if another process were writing to the same configuration file. Applications already need to handle that. And if they use the high-level bindings, that's already handled for them. If we get the lock, then it's guaranteed that no conflicts can happen.
Based on what I can see in #4892, I'd also say this would be very complicated to do. If we update the session and then commit the changes, we need a rollback procedure for the session in case the commit fails. Otherwise, we end up with changes in the session that never actually happened. As atmaxinger says, we'd basically need a kind of "distributed transaction", so that we can do the

IMO the best option would still be append-only session recording. Every
To be clear: This is at the cost of permitting only one concurrent modification of the entire global KDB, no matter how unrelated the modifications are. Currently, conflicts only happen when two processes write to the same file. With the recording lock, even writes to different files have to be done one after the other. IMO it is an acceptable trade-off, but it has to be clearly documented. For short-lived recording sessions it's fine to limit the entire system to a single
I don't think a "distributed transaction" would be necessary; we can simply fail if two kdbSet calls are executed at the same time? (Like we already do.)
The commit is a sequence of
Actually a journal could even improve the current situation, as then we could recover from failing commits. But to keep the status quo would be okay (that commits might fail and we simply keep it as unlikely as possible). I think the implementation of a journal is way out of scope for 1.0.
Sounds like an easy fix. But would we get rid of locks completely this way? As you know, the problem with locks is that they create an unpredictably long delay inside of
I mean that you get the lock, do everything fine for recording, but overall

IMHO, the ideal would be that the session recording file gets prepared before the commit, and during the commit it simply gets moved like all the other configuration files.
Btw., a decision would have been a big advantage here; we are discussing this way too late.
Not sure; it sounds like without the lock there would still be a possibility of having either (1) changes recorded in the session that were aborted, or (2) permanently committed changes that are not recorded in the session. Also, the session is recorded via an entirely separate
True, but currently a (file-based) backend (using
With the current recording setup, everything shares a single backend for the recording session. So it's not that simple: two simultaneous
We had multiple decision PRs with long discussions. IIRC, back then you said implementation details should be discussed later. IMHO what we did here is absolutely the correct approach. The only way to get a good solution for something as complex as this is attempting a solution and iterating on it.
I'm curious: could you point me to those lock plugins? I just took a quick look and didn't find any.
Originally posted by @atmaxinger in #4892 (comment)