
Concurrent access to lru.Cache #87

Closed
abennett opened this issue Aug 29, 2017 · 3 comments
Comments

@abennett

It doesn't look like lru.Cache is safe for concurrent access. Is this correct? If so, would you be opposed to me adding in a mutex?

I'm new to open source contribution and this looks like it might be an easy win.

@abennett
Author

What would be the downsides of replacing the current map with the new sync.Map?

@ghost

ghost commented Oct 22, 2017

Hi,

I hope I am not being redundant in responding, but I noticed that this thread has been quiet for a while.

From looking at groupcache.go, the lru cache is only ever used in a thread-safe manner: the calling code holds a lock around every access. So the LRU leaves its locking policy up to the caller. I think that's reasonable, because it avoids double-locking - there's no question of who has to implement the locking, or where it should go in the code.

There are a couple of resources on sync.Map:
https://www.youtube.com/watch?v=C1EtfDnsdDs
https://medium.com/@deckarep/the-new-kid-in-town-gos-sync-map-de24a6bf7c2c

So, I guess the question is: do you have a measured cache contention problem in groupcache? :-)

Cheers,

@abennett
Author

abennett commented Nov 2, 2017

Not redundant at all, and your comments make sense. I am satisfied. Thank you!

@abennett abennett closed this as completed Nov 2, 2017