
disk write and leader timeout #75

Open

skypexu opened this issue May 25, 2018 · 4 comments

Comments

@skypexu
Contributor

skypexu commented May 25, 2018

Synchronously writing the log to disk can block the leader's main loop and cause it to time out, forcing the cluster to elect a new leader, which hurts availability. I think the leader should have a dedicated thread that flushes the log to disk, plus an index field recording up to which point the log has been flushed; the commit index should then be based on that flushed index. For a follower, the log can still be written synchronously.

--
update:

It could also be better to apply log entries asynchronously, so that applying does not block the main loop either.
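
A minimal sketch of what this could look like, assuming a dedicated flusher thread and a flushed-index field; all names here are hypothetical and not part of the library's API:

#include <pthread.h>
#include <unistd.h>

typedef struct
{
    pthread_mutex_t lock;
    pthread_cond_t  has_work;
    int appended_idx;   /* highest index appended to the in-memory log */
    int flushed_idx;    /* highest index durably written to disk */
    int running;        /* shutdown handling omitted for brevity */
} flusher_t;

/* stand-in for the real write()/fsync() of entries in (from, to] */
static void flush_range_to_disk(int from, int to)
{
    (void)from; (void)to;
    usleep(1000); /* simulate a slow disk */
}

static void* flusher_main(void* arg)
{
    flusher_t* f = arg;
    pthread_mutex_lock(&f->lock);
    while (f->running)
    {
        while (f->running && f->flushed_idx == f->appended_idx)
            pthread_cond_wait(&f->has_work, &f->lock);
        int from = f->flushed_idx, to = f->appended_idx;
        pthread_mutex_unlock(&f->lock);

        flush_range_to_disk(from, to);  /* the slow part runs off the main loop */

        pthread_mutex_lock(&f->lock);
        f->flushed_idx = to;            /* commit index may now advance up to 'to' */
    }
    pthread_mutex_unlock(&f->lock);
    return NULL;
}

/* called from the main loop right after entries are appended in memory */
static void flusher_notify(flusher_t* f, int new_appended_idx)
{
    pthread_mutex_lock(&f->lock);
    f->appended_idx = new_appended_idx;
    pthread_cond_signal(&f->has_work);
    pthread_mutex_unlock(&f->lock);
}

The leader's commit index would then be bounded by flushed_idx, and the same structure could be reused for applying committed entries asynchronously.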

@vgfree

vgfree commented May 25, 2018

I suggest that the library provide asynchronous interfaces for write operations.
It would also be best to provide batch write interfaces, because writing entries one by one is too slow.

For example:

typedef struct
{
    /** number of entries within this batch */
    int n_entries;

    /** array of entries within this batch */
    raft_entry_t* entries[0];
} raft_batch_t;

int log_append_entry(log_t* me_, raft_batch_t* bat)
{
    log_private_t* me = (log_private_t*)me_;
    int e;

    e = __ensurecapacity(me, bat->n_entries);
    if (e != 0)
        return e;

    /* copy the whole batch into the in-memory ring buffer */
    for (int i = 0; i < bat->n_entries; i++)
    {
        int back = (me->back + i) % me->size;
        memcpy(&me->entries[back], bat->entries[i], sizeof(raft_entry_t));
    }

    void* ud = raft_get_udata(me->raft);
    if (me->cb)
    {
        if (me->cb->log_offer_batch)
        {
            /* hand the whole batch to user code in a single call */
            int idx = me->base + me->count + 1;
            e = me->cb->log_offer_batch(me->raft, ud, bat, idx);
            if (0 != e)
                return e;
            for (int i = 0; i < bat->n_entries; i++)
            {
                idx = me->base + me->count + 1 + i;
                raft_offer_log(me->raft, bat->entries[i], idx);
            }
        }
        else if (me->cb->log_offer)
        {
            /* fall back to offering entries one by one */
            for (int i = 0; i < bat->n_entries; i++)
            {
                int idx = me->base + me->count + 1 + i;
                e = me->cb->log_offer(me->raft, ud, bat->entries[i], idx);
                if (0 != e)
                    return e;
                raft_offer_log(me->raft, bat->entries[i], idx);
            }
        }
    }

    me->count += bat->n_entries;
    me->back += bat->n_entries;
    me->back = me->back % me->size;

    return 0;
}
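
For this to work, the library would also need to declare the log_offer_batch callback used above. A possible shape, mirroring the existing per-entry callback; the typedef name and exact parameters are assumptions, not the current API:

/* Hypothetical batch counterpart of log_offer: receives the whole batch
 * plus the index that the first entry of the batch will occupy. */
typedef int (*func_logbatch_event_f)(raft_server_t* raft,
                                     void* user_data,
                                     raft_batch_t* batch,
                                     int first_idx);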

@freeekanayaka
Contributor

I'm not entirely sure @skypexu and @vgfree intended this as well, but it'd be nice to also have the leader not wait for its own log to be written and synced to disk, effectively performing its own local AppendEntries in parallel with the network AppendEntries RPCs. The matchIndex array should include the leader itself, and the leader should update it when it's done with its local disk write. This eliminates a disk write from Raft's critical path. Note that an entry can then be considered committed even before the leader finishes writing it to its own local disk, if a majority of followers respond that they have appended it.

See "10.2.1 Writing to the leader's disk in parallel" in Ongaro's Consensus: Bridging Theory and Practice dissertation. His LogCabin implementation of Raft uses this optimization.

@freeekanayaka
Contributor

@vgfree regarding the batch interfaces feature request, I'd say it's essentially tracked by #16.

@skypexu
Contributor Author

skypexu commented May 22, 2019

> I'm not entirely sure @skypexu and @vgfree intended this as well, but it'd be nice to also have the leader not wait for its own log to be written and synced to disk, effectively performing its own local AppendEntries in parallel with the network AppendEntries RPCs. The matchIndex array should include the leader itself, and the leader should update it when it's done with its local disk write. This eliminates a disk write from Raft's critical path. Note that an entry can then be considered committed even before the leader finishes writing it to its own local disk, if a majority of followers respond that they have appended it.
>
> See "10.2.1 Writing to the leader's disk in parallel" in Ongaro's Consensus: Bridging Theory and Practice dissertation. His LogCabin implementation of Raft uses this optimization.

Yes, I agree it should write to the leader's disk in parallel.
