Redis re-hashes occurring at 3Gi Memory Usage. Why? #12964
Unanswered
WillNilges asked this question in Q&A
Replies: 1 comment
Current working theory is that the hash table algorithm Redis uses is, in fact, looking at the number of keys rather than the amount of memory used. Because we're storing the same number of keys in fewer bytes of RAM, we see the re-hash at 3Gi instead of 4Gi. That would mean the behavior is unchanged, just more performant, thanks to omitting the workload that comes with TTL keys.
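For what it's worth, that theory matches my understanding of how Redis's dict grows: expansion is driven purely by entry count versus bucket count (load factor), never by bytes of memory used. Here is a minimal sketch of that condition; the function names and the simplified threshold are my own illustration, not Redis's actual `dict.c` internals:

```python
def next_table_size(needed: int, initial: int = 4) -> int:
    """Smallest power-of-two bucket count that fits `needed` entries
    (Redis tables start small and always double)."""
    size = initial
    while size < needed:
        size *= 2
    return size

def needs_expand(entries: int, buckets: int) -> bool:
    """Simplified expansion trigger: load factor (entries/buckets) >= 1.
    Note that nothing here looks at bytes of memory used."""
    return entries >= buckets

# ~30-50M keys lands the table at the 2**25-bucket size class.
buckets = next_table_size(30_000_000)
print(buckets)                            # 33554432 == 2**25
print(needs_expand(33_554_432, buckets))  # True: table doubles, rehash starts
print(needs_expand(33_500_000, buckets))  # False: just under the threshold
```

Under this model, two setups holding the same number of keys rehash at the same key count regardless of how much memory the values consume, which is exactly what was observed.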
👋🏻 Hey all, I'm trying to optimize the performance of one of our caches.
Some background: I've been working with vyaramaka, who wrote this issue.
TL;DR on that issue: our caches are pretty big (like 30-50M keys each across roughly 20 instances), and we hit a failure mode where Redis wants to re-hash at 4Gi of memory usage. The problem is that we set a TTL on every value in the cache, so the combined workload of expiring keys + serving requests + a rehash means Redis is pegged for minutes at a time.
To avoid that problem, we figured that switching to primarily an LRU cache, and not setting a TTL on most keys, would lighten the workload on Redis and eliminate, or at least reduce, the problem.
Good news is that we have seen improvements! Re-hashes no longer peg the CPU.
But that brings me to my question: we noticed that with this setup, the rehashes occur at 3Gi instead of 4Gi, and in fact no further re-hashes occur after that, all the way up to 7Gi. Why???
We took a look at the number of keys in the cache for both our vanilla setup and this new "Selective TTL" setup, as I'm calling it. When the re-hashes occur, the key count is 33.54M for vanilla and 33.17M for Selective TTL (roughly™ the same).
I've been able to reproduce this result 100% of the time (again, good news!), so I think there's just some machination of Redis that I don't understand. I thought that re-hashing was controlled by the amount of memory the internal hash table was using, but is it instead the number of keys?
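A quick sanity check on those numbers (my own arithmetic, not something from the thread): if Redis's bucket counts are powers of two, both observed key counts sit within about 1% of the 2**25 boundary, which would explain rehashes landing at the same key count despite very different memory usage:

```python
# Hypothetical check: if bucket counts are powers of two, a rehash
# at ~33.5M keys lines up with the 2**25 boundary.
boundary = 2 ** 25
print(boundary)  # 33554432, i.e. ~33.55M buckets

# Observed key counts at rehash time, from the measurements above:
vanilla = 33_540_000        # ~33.54M
selective_ttl = 33_170_000  # ~33.17M

# Both are within ~2% of the 2**25 boundary:
for n in (vanilla, selective_ttl):
    print(abs(n - boundary) / boundary < 0.02)  # True
```

If that holds, the memory level at which the rehash appears (3Gi vs. 4Gi) is just a side effect of how many bytes each setup spends per key, not the trigger itself.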
If anyone knows, I'd really appreciate an answer to this riddle. Thank you! 🙇🏻