
Disk Persistence: call fsync every time? #267

Open
Steamgjk opened this issue Oct 12, 2021 · 3 comments

Comments

@Steamgjk

Hi, @greensky00
In the NuRaft implementation, does it call fsync every time it persists a log entry, or does it just flush to the OS page cache?
Last time I was playing with NuRaft under low load, the latency was only about ~400us.
But today I ran a microbenchmark of fsync on my VM, and the median latency was about 2ms per fsync completion. This makes me suspect that NuRaft does not call fsync for each request, so I want to confirm with you: as a follower, does it call fsync every time before replying to the leader's append request, or does it use some other batching design? [In Diego's ATC paper, the replica is required to persist the log every time before replying to the AppendEntries RPC.]
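
For reference, a minimal sketch of this kind of fsync microbenchmark (plain POSIX; the file name, record size, and iteration count are arbitrary choices for illustration, not the exact test described above):

#include <fcntl.h>
#include <unistd.h>
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    // Append small records to a file and time each fsync call.
    int fd = ::open("fsync_bench.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const int iters = 1000;
    std::vector<double> lat_us;
    lat_us.reserve(iters);
    char buf[512] = {0};

    for (int i = 0; i < iters; ++i) {
        ::write(fd, buf, sizeof(buf));           // write to the OS page cache
        auto t0 = std::chrono::steady_clock::now();
        ::fsync(fd);                             // force the data to stable storage
        auto t1 = std::chrono::steady_clock::now();
        lat_us.push_back(
            std::chrono::duration<double, std::micro>(t1 - t0).count());
    }
    ::close(fd);

    std::sort(lat_us.begin(), lat_us.end());
    std::printf("median fsync latency: %.1f us\n", lat_us[iters / 2]);
    return 0;
}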

@greensky00
Contributor

@Steamgjk
As I mentioned in #258 (comment), it depends on the log store implementation, so it is totally up to you. Your log store can do an fsync for every log write, or only once per batch if you put it into end_of_append_batch:

virtual void end_of_append_batch(ulong start, ulong cnt) {}
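
To illustrate the batching idea only: below is a sketch of a file-backed log writer that defers fsync to a batch-end hook shaped like the method above. It does not implement the full nuraft::log_store interface, and the class and member names are hypothetical.

#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <string>

class BatchedDurableLog {
public:
    explicit BatchedDurableLog(const std::string& path)
        : fd_(::open(path.c_str(), O_CREAT | O_WRONLY | O_APPEND, 0644)) {}
    ~BatchedDurableLog() { if (fd_ >= 0) ::close(fd_); }

    // Called for every log entry: write goes to the OS page cache only.
    void append(const void* data, size_t len) {
        ::write(fd_, data, len);
    }

    // Called once per append batch (the end_of_append_batch hook):
    // a single fsync makes the whole batch durable before the follower replies.
    void end_of_append_batch(uint64_t /*start*/, uint64_t /*cnt*/) {
        ::fsync(fd_);
    }

private:
    int fd_;
};

If you want per-write durability instead, you would simply call fsync inside append and leave end_of_append_batch empty.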

@Steamgjk
Author

Steamgjk commented Oct 12, 2021

> @Steamgjk As I mentioned in #258 (comment), it depends on the log store implementation, so it is totally up to you. Your log store can do an fsync for every log write, or only once per batch if you put it into end_of_append_batch:
>
> virtual void end_of_append_batch(ulong start, ulong cnt) {}

@greensky00
In that case, if I just run it with the default setting, that means I am not doing persistence, right?

While generating the benchmark results, were you doing fsync every time? [I guess you were not, because the latency would be millisecond-level with an fsync per request.]

@greensky00
Contributor

@Steamgjk
Note that the log store (and the state machine and state manager) is not part of NuRaft itself, so there is no such "default mode". The in-memory log store is just an example showing how to implement the interface, and it should not be used in real use cases.

The purpose of the benchmark test is to measure the pure performance of NuRaft, as mentioned here:
https://github.com/eBay/NuRaft/tree/master/tests/bench
so of course it does not include any disk-related overhead such as fsync.

The reason is that the performance of the log store and state machine will vary greatly depending on their implementation and many other factors. But that is not the performance of NuRaft -- NuRaft provides the "interface" only.
