creating bgs splitter daemon #388
base: main
Conversation
splitter/splitter.go
func (rc *ringChunk) events() []*events.XRPCStreamEvent {
	rc.lk.Lock()
	defer rc.lk.Unlock()
	return rc.buf
}
Does this need to be a copy of the buffer?
Nope: the buffer is append-only, and this operation returns a copy of the slice header, which means we can read, in a threadsafe manner, all of the data that exists in the slice at this point in time. Future appends will not affect the slice we've grabbed (even if they end up triggering a resize of the slice, the old backing array is still valid and safe to read from).
log.Errorf("events playback: %s", err)
lastSeq := *since
// run playback to get through *most* of the events, getting our current cursor close to realtime
if err := em.persister.Playback(ctx, *since, func(e *XRPCStreamEvent) error {
In my 'ringbuf' persister implementation in this PR, I take care to get very close to 'live'.
That is not necessarily (yet) the case for the disk persister, and I'm not sure where I want that responsibility to lie. Should the caller just call it until they stop getting new events, or should the persister try to return all events it knows about up to the point that it returns?
I think it might make sense for the persister to make an effort to play back more events, but I don't think it should aim for 100% if that's too complex. Maybe it would make sense to check the logfile refs to see whether we're on the last logfile we know about, and then keep looping while new files appear until there aren't any more. If we had larger logfiles I'd be less concerned about the Persister getting stuck trying to catch a user up, since 100k-event logfiles would take a lot longer to roll over.
I think the aim for the DiskPersister should be to play back until the user is in the current logfile, though; that shouldn't be too hard.
Events changes look okay to me on first pass
This is all the refactors and fixes from #388, but without the new splitter daemon code. We should get this merged ASAP.