
Why doesn't bigcache consider designing expiration time for each cache key? #360

Open
zsyu9779 opened this issue Apr 4, 2023 · 4 comments

zsyu9779 commented Apr 4, 2023

While using bigcache and reading its source code, I noticed that within a single instance, all key-value pairs share one expiration time. A time.Ticker periodically checks whether entries have expired, at the interval set by CleanWindow. Why wasn't a method considered that allows setting an expiration time for each cache key individually?

janisz (Collaborator) commented Apr 5, 2023

> So why wasn't a method considered to allow setting expiration time for each cache key?

While it is technically feasible to implement such a feature, it could introduce complexity to the cache system. The bigcache library is based on a circular buffer (i.e., queue.BytesQueue) and does not have a defragmentation process. If we allow every key to be removed at different times, it could result in a significant amount of unused memory. On the other hand, we do allow the removal of a single key, but it is more like marking it as obsolete rather than actually deleting it.
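The "marking it as obsolete rather than actually deleting it" point can be sketched as follows. This is a hypothetical simplification, not bigcache's real `queue.BytesQueue`: entries are only ever appended at the tail, `Delete` sets a tombstone flag, and the bytes stay in the buffer, which is the unused-memory cost janisz describes:

```go
package main

import "fmt"

// record locates an entry in the buffer; deleted marks a tombstone.
type record struct {
	offset, length int
	deleted        bool
}

// bytesQueue is an append-only byte buffer with an index on top,
// loosely imitating a circular buffer before it wraps around.
type bytesQueue struct {
	buf     []byte
	records map[string]*record
}

func newBytesQueue() *bytesQueue {
	return &bytesQueue{records: make(map[string]*record)}
}

// Push always appends at the tail; there is no hole-filling.
func (q *bytesQueue) Push(key string, data []byte) {
	q.records[key] = &record{offset: len(q.buf), length: len(data)}
	q.buf = append(q.buf, data...)
}

// Delete marks the entry obsolete but reclaims no buffer space.
func (q *bytesQueue) Delete(key string) {
	if r, ok := q.records[key]; ok {
		r.deleted = true
	}
}

// DeadBytes counts buffer space still held by tombstoned entries.
func (q *bytesQueue) DeadBytes() int {
	n := 0
	for _, r := range q.records {
		if r.deleted {
			n += r.length
		}
	}
	return n
}

func main() {
	q := newBytesQueue()
	q.Push("a", []byte("aaaa"))
	q.Push("b", []byte("bb"))
	q.Delete("a")
	fmt.Println(len(q.buf), q.DeadBytes()) // 6 4
}
```

With a single shared TTL, the head of the queue is always the oldest entry, so eviction can simply advance the head; per-key TTLs would leave dead regions like the one above scattered through the middle of the buffer.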

janisz added the question label Apr 5, 2023
zsyu9779 (Author) commented Apr 6, 2023

> So why wasn't a method considered to allow setting expiration time for each cache key?
>
> While it is technically feasible to implement such a feature, it could introduce complexity to the cache system. The bigcache library is based on a circular buffer (i.e., queue.BytesQueue) and does not have a defragmentation process. If we allow every key to be removed at different times, it could result in a significant amount of unused memory. On the other hand, we do allow the removal of a single key, but it is more like marking it as obsolete rather than actually deleting it.

Regarding the memory-waste problem, could we consider a compaction algorithm? That is, while operating on the bytesQueue, track the number of expired keys; once it crosses a threshold, asynchronously allocate a new, expanded bytesQueue and move the live data from the old one into it. During the move, read and write requests would be synchronized across the two bytesQueues.
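The core of this proposal can be sketched as a copy-live-entries step. Everything here is hypothetical (the names `liveEntry` and `compact` are not bigcache APIs), and the hard part the comment mentions, serving reads and writes from both queues while the copy runs, is deliberately left out:

```go
package main

import "fmt"

// liveEntry pairs a key with its bytes and a tombstone flag.
type liveEntry struct {
	key     string
	data    []byte
	deleted bool
}

// compact builds a new buffer containing only live entries and returns
// their new offsets, simulating "move data to a new bytesQueue".
func compact(buf []byte, entries []liveEntry) ([]byte, map[string]int) {
	newBuf := make([]byte, 0, len(buf))
	offsets := make(map[string]int)
	for _, e := range entries {
		if e.deleted {
			continue // expired/deleted entries are not copied over
		}
		offsets[e.key] = len(newBuf)
		newBuf = append(newBuf, e.data...)
	}
	return newBuf, offsets
}

func main() {
	old := []byte("aaaabbcc") // "a" is tombstoned, "b" and "c" are live
	entries := []liveEntry{
		{key: "a", data: []byte("aaaa"), deleted: true},
		{key: "b", data: []byte("bb")},
		{key: "c", data: []byte("cc")},
	}
	newBuf, offsets := compact(old, entries)
	fmt.Println(string(newBuf), offsets["b"], offsets["c"]) // bbcc 0 2
}
```

The copy itself is simple; the complexity janisz warns about lives in the synchronization, since every offset recorded in the index must be rewritten atomically with respect to concurrent readers.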

zsyu9779 (Author) commented Apr 6, 2023

However, this approach does seem to introduce some complexity to the entire system :(

janisz (Collaborator) commented Apr 6, 2023

In my opinion, we need to weigh complexity against usability when evaluating changes to the system. While new features can improve functionality, they also increase maintenance burden. It's worth exploring other in-memory caching solutions on the market, as they may offer better performance or align better with specific requirements. Bigcache itself is particularly well-suited for mostly static data that is not frequently modified after application startup.
