
creating bgs splitter daemon #388

Draft · whyrusleeping wants to merge 5 commits into main

Conversation

whyrusleeping
Collaborator

No description provided.

func (rc *ringChunk) events() []*events.XRPCStreamEvent {
	rc.lk.Lock()
	defer rc.lk.Unlock()
	return rc.buf
}
Collaborator

Does this need to be a copy of the buffer?

Collaborator Author

Nope. The buffer is append-only, and this operation returns a copy of the slice header, which means we can read all of the data that exists in the slice at this point in time in a threadsafe manner. Future appends will not affect the slice we've grabbed (even if they end up triggering a resize, the old backing array is still valid and safe to read from).
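For reference, a minimal self-contained sketch of the property being relied on here (the type and element names below are hypothetical, not the PR's actual code): returning the slice under the lock copies the slice header (pointer, length, capacity), so a later append that grows the buffer cannot change what the snapshot sees.

package main

import (
	"fmt"
	"sync"
)

type ringChunkSketch struct {
	lk  sync.Mutex
	buf []int
}

// events returns a snapshot: the returned slice header is a copy, so
// concurrent appends cannot change what the caller observes.
func (rc *ringChunkSketch) events() []int {
	rc.lk.Lock()
	defer rc.lk.Unlock()
	return rc.buf
}

func (rc *ringChunkSketch) add(v int) {
	rc.lk.Lock()
	defer rc.lk.Unlock()
	rc.buf = append(rc.buf, v) // may reallocate; the old array stays valid
}

func main() {
	rc := &ringChunkSketch{buf: make([]int, 0, 2)}
	rc.add(1)
	rc.add(2)
	snap := rc.events()
	rc.add(3) // cap exceeded: buf now points at a new backing array
	fmt.Println(snap)        // still [1 2]
	fmt.Println(rc.events()) // [1 2 3]
}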

lastSeq := *since
// run playback to get through *most* of the events, getting our current cursor close to realtime
if err := em.persister.Playback(ctx, *since, func(e *XRPCStreamEvent) error {
	...
}); err != nil {
	log.Errorf("events playback: %s", err)
}
Collaborator Author

In my 'ringbuf' persister implementation in this PR, I take care to get very close to 'live'.
That is not necessarily (yet) the case for the disk persister, and I'm not sure where I want that responsibility to lie. Should the caller just call it until they stop getting new events? Or should the persister try to return all events it knows about, up until the point that it returns?
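For illustration, here is a sketch of the first option: a caller-side catch-up loop that re-invokes Playback until a pass yields no new events. The Persister interface and Event type below are simplified stand-ins, not the PR's actual definitions.

package playback

import "context"

// Simplified stand-ins for the real event and persister types.
type Event struct{ Seq int64 }

type Persister interface {
	Playback(ctx context.Context, since int64, cb func(*Event) error) error
}

// catchUp re-runs Playback, advancing the cursor after each pass, until a
// pass delivers no new events, i.e. we are as close to live as playback
// alone can get us. It returns the final cursor.
func catchUp(ctx context.Context, p Persister, since int64) (int64, error) {
	for {
		sawNew := false
		err := p.Playback(ctx, since, func(e *Event) error {
			since = e.Seq
			sawNew = true
			return nil
		})
		if err != nil {
			return since, err
		}
		if !sawNew {
			return since, nil
		}
	}
}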

Collaborator

I think it might make sense for the persister to make an effort to play back more events, but I don't think it should aim for 100% if that's too complex. Maybe it would make sense to check the logfile refs once we're on the last logfile we know about, and then keep looping while there are more files until there aren't. If we had larger logfiles I'd be less concerned about the Persister getting stuck trying to catch a user up, since 100k-event logfiles would take a lot longer to roll over.

Collaborator

I think the aim for the DiskPersister should be to play back until the user is in the current logfile, though; that shouldn't be too hard.
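A sketch of that shape, with logfiles modeled as in-memory slices so it stays self-contained (the real DiskPersister reads files on disk; every name below is hypothetical): drain the logfiles we know about, re-check the refs, and return only once no new logfile rolled over during the pass, which means the caller has been fed up into the current logfile.

package main

import (
	"context"
	"fmt"
	"sync"
)

type Event struct{ Seq int64 }

// diskPersisterSketch stands in for the real DiskPersister; each entry in
// files is one logfile, and files[len(files)-1] is the current (live) one.
type diskPersisterSketch struct {
	lk    sync.Mutex
	files [][]Event
}

func (dp *diskPersisterSketch) snapshotFiles() [][]Event {
	dp.lk.Lock()
	defer dp.lk.Unlock()
	out := make([][]Event, len(dp.files))
	copy(out, dp.files)
	return out
}

// Playback replays events after `since`, then re-checks the logfile refs;
// if a new logfile appeared mid-pass it loops again, so it only returns
// once the caller is caught up into the current logfile.
func (dp *diskPersisterSketch) Playback(ctx context.Context, since int64, cb func(*Event) error) error {
	for {
		files := dp.snapshotFiles()
		for _, f := range files {
			for i := range f {
				if f[i].Seq <= since {
					continue
				}
				if err := cb(&f[i]); err != nil {
					return err
				}
				since = f[i].Seq
			}
		}
		if len(dp.snapshotFiles()) == len(files) {
			return nil // no rollover during this pass: we're in the current logfile
		}
	}
}

func main() {
	dp := &diskPersisterSketch{files: [][]Event{{{Seq: 1}, {Seq: 2}}, {{Seq: 3}}}}
	_ = dp.Playback(context.Background(), 0, func(e *Event) error {
		fmt.Println("replayed seq", e.Seq)
		return nil
	})
}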

@ericvolp12
Collaborator

Events changes look okay to me on first pass

whyrusleeping added a commit that referenced this pull request Oct 31, 2023
This is all the refactors and fixes from #388, but without the new
splitter daemon code. Should get this merged ASAP.