
How to deal with replication lag? #257

Open
pouya-eghbali opened this issue Nov 7, 2019 · 9 comments

Comments

@pouya-eghbali

My after.update hooks return the old document because of replication lag: Meteor writes the update to the primary, but collection-hooks reads the document back from a secondary before it has had a chance to synchronize with the primary.

How can I deal with this? I cannot simply add a timeout to my hooks; that won't solve the issue.
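For readers who want to reproduce the race without a real replica set, here is a minimal Node.js simulation (everything here is illustrative; actual replication happens between mongod members, not in-process Maps):

```javascript
// Simulate replication lag: a "primary" store and a "secondary" store
// that only copies the primary's data after a delay.
const primary = new Map();
const secondary = new Map();

function writeToPrimary(id, doc, replicationDelayMs) {
  primary.set(id, doc);
  // Replication to the secondary happens asynchronously, later.
  setTimeout(() => secondary.set(id, { ...doc }), replicationDelayMs);
}

function hookRead(id) {
  // An after.update hook that re-fetches the document from a secondary
  // immediately after the write sees whatever the secondary has.
  return secondary.get(id);
}

// Seed both members with the same document, then update the primary.
primary.set('a', { name: 'old' });
secondary.set('a', { name: 'old' });
writeToPrimary('a', { name: 'new' }, 50); // 50 ms of simulated lag

console.log(hookRead('a').name); // "old" -- the stale read the hook sees
```

The primary already holds `{ name: 'new' }` at this point, but the hook's read returns the old document until replication catches up.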

@StorytellerCZ
Member

Yeah, timeouts are not a solution. At a quick look, I'm wondering if #256 might fix the issue without having to create a workaround.

@pouya-eghbali
Author

I checked #256. I doubt it helps with this situation, but I can give it a try. I'll test and report tonight.

@pouya-eghbali
Author

OK, I tested #256 just now and it does not solve the issue I'm facing. As a temporary solution I'll just add a timeout to my fork while I try to find a real fix. Any suggestions or ideas? Does anyone know a way to fetch documents from the primary?

@StorytellerCZ
Member

@zimme @sebakerckhof

@sebakerckhof
Contributor

Wouldn't this need to be solved by setting appropriate write/read concerns? Of course this comes with a performance cost...

@pouya-eghbali
Author

Yeah, that would solve the issue. However, I'd need to define a global write concern (I couldn't find a way to set the write concern per insert/update in Meteor).

I'm fine with the performance cost; in my case faster reads are more important than faster writes. But then, if one of the secondaries dies, the write operation returns an error. I'm not a Mongo expert; that's what I understood from the Mongo docs and a few questions on Stack Overflow.
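For reference, a write concern can be set globally through the connection string that Meteor reads from `MONGO_URL` (the hosts, database, and replica-set name below are placeholders). Note that with `w=majority` a three-member set keeps acknowledging writes when a single secondary dies; `wtimeoutMS` makes a write error out after the given time instead of waiting indefinitely when the concern cannot be satisfied:

```shell
# Placeholder hosts/db/replica set. w=majority waits for a majority of
# members to acknowledge each write; wtimeoutMS caps that wait (in ms).
MONGO_URL="mongodb://host1,host2,host3/mydb?replicaSet=rs0&w=majority&wtimeoutMS=5000"
```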

@SimonSimCity
Member

SimonSimCity commented Nov 11, 2019

The related ticket in the meteor repository (meteor/meteor#10443) might also be worth a look.

@pouya-eghbali
Author

`readPreference: 'primary'`, as suggested in meteor/meteor#10443, might solve the issue, but the entire point of having secondaries, at least in my case, is to read from them.
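For reference, the global form of that suggestion as a connection-string option in `MONGO_URL` (hosts and database are placeholders; `primary` is also the driver's default read preference):

```shell
# Forces every read to the primary -- reads are always up to date, but
# this gives up the read-scaling benefit of the secondaries.
MONGO_URL="mongodb://host1,host2/mydb?replicaSet=rs0&readPreference=primary"
```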

Anyway, thank you all for your suggestions. I ended up making this; posting here for reference.

@evolross
Contributor

evolross commented Apr 2, 2020

I think I just ran into this issue too. I updated my production app's MONGO_URL to use readPreference=nearest because I'm trying to add support for my US-based us-east-1 Galaxy app in Europe by deploying to the European Galaxy on eu-west-1 (see the Meteor forum post about the issues). So I added a read-only node on Atlas in eu-west-1 for my European users to read from.

It looks like this stopped `doc` from being up-to-date in my after.update hooks, which consequently prevented only some of my users from upgrading in production... yikes!

I'm seeing quite a few errors around the app because of this. Updating my hooks to get their values from `modifier` instead of `doc` where possible is perhaps one work-around.
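That work-around can be sketched in plain JavaScript (a simplified illustration: it only handles top-level `$set` fields, while real modifiers can contain many other operators and dotted paths):

```javascript
// Overlay the $set portion of a Mongo modifier onto a possibly stale
// doc, so the hook sees the values that were just written without
// re-reading from a lagging secondary.
function applySetModifier(staleDoc, modifier) {
  const fresh = { ...staleDoc };
  for (const [field, value] of Object.entries(modifier.$set || {})) {
    fresh[field] = value; // top-level fields only in this sketch
  }
  return fresh;
}

const staleDoc = { _id: 'a', plan: 'free', name: 'Alice' };
const modifier = { $set: { plan: 'pro' } };
const patched = applySetModifier(staleDoc, modifier);
console.log(patched.plan); // "pro", even if the secondary still says "free"
```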

One weird thing: after adding readPreference=nearest, this breakage with `doc` started happening to US users as well. Before last night's DB URL update to include readPreference=nearest, `doc` was always fine. I thought adding readPreference=nearest would just make European-based users read from the new read-only node I added in eu-west-1, but apparently it makes all users read from their nearest node, which for many is out of date due to replication. Is this what happens when adding readPreference=nearest?
