
[V6˖] Dealing with cache slams

Georges.L edited this page Dec 11, 2023 · 3 revisions

⚠️ As of V9.2, the preventCacheSlams and cacheSlamsTimeout options are deprecated.
They will be removed from non-file-based drivers as of v10.
The reason is that modern drivers such as Redis, Memcached, and Couchbase already handle concurrency well on their own.
This option was made only for file-based drivers, which have low performance, and will from now on be limited to those drivers: Files, Sqlite, and Leveldb.

Sometimes you will run into a well-known issue: cache slams.

Basically, a cache slam occurs when multiple concurrent requests attempt to write the same big set of data into the cache. This operation can take some time, and during that time a concurrent request (#2) tries to get the same cache item, which has not yet been fully written by request #1.
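The race can be sketched with a toy in-memory cache (plain PHP, nothing from phpfastcache): both simulated requests miss before either has written, so the expensive build runs twice:

```php
<?php
// Toy in-memory cache to illustrate the race (not phpfastcache itself).
$cache = [];
$buildCount = 0;

// Simulated expensive build (e.g. heavy database/webservice work).
$buildDataset = function () use (&$buildCount): string {
    $buildCount++;
    return 'huge-dataset';
};

// Both requests check the cache BEFORE either has finished writing,
// which is exactly what happens with slow concurrent writes.
$request1Sees = $cache['test1'] ?? null; // miss
$request2Sees = $cache['test1'] ?? null; // miss too

if ($request1Sees === null) {
    $cache['test1'] = $buildDataset(); // request #1 builds and writes
}
if ($request2Sees === null) {
    $cache['test1'] = $buildDataset(); // request #2 rebuilds and overwrites
}

echo $buildCount; // prints 2: the expensive work ran twice
```

With real parallel requests the interleaving is not deterministic like this toy sketch, but the outcome is the same: duplicated heavy work and a redundant overwrite.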

Here's a quick timeline to visualize the issue (without cache slams protection) with only 2 concurrent requests:

Request #1: Connecting...
Request #1: Attempting to get the item "test1"
Request #1: The item "test1" does not exist
Request #1: Working to build the "test1" cache item with a heavy database/webservice operation.
Request #2: Connecting...
Request #2: Attempting to get the item "test1"
Request #1: Work done, attempting to write the huge dataset into the cache, this may take a while...
Request #1: The cache starts writing the dataset
Request #2: The item "test1" does not (yet) exist, or is read as corrupted because request #1 has not finished the write operation
Request #2: Working to build the "test1" cache item with a heavy database/webservice operation.
Request #1: Finished writing the item "test1" into the cache
Request #2: Work done, attempting to write the huge dataset into the cache, this may take a while...
Request #2: The cache starts writing the dataset
Request #1: Finalizes the request and serves the content
Request #1: Exiting...
Request #2: Finished writing the item "test1" into the cache
Request #2: Finalizes the request and serves the content
Request #2: Exiting...

See what happened? The cache was overwritten immediately, because of the long time request #1 spent writing into the cache.

To significantly reduce this risk, V6 introduces an option called preventCacheSlams (disabled by default). This option informs concurrent requests that a write operation is in progress by writing a small "batch entity" into the cache before writing the huge set of data. Concurrent requests that try to work on the same cache item will receive this "batch entity", telling them to wait for a time configurable with the cacheSlamsTimeout option.
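The idea behind the batch entity can be sketched in plain PHP with a toy in-memory cache. All names here are illustrative, not phpfastcache's actual internals:

```php
<?php
// Toy sketch of the batch-entity protocol (illustrative names,
// NOT phpfastcache's real implementation).
const BATCH_MARKER = '__batch_entity__';

$cache = [];

// Writer side: publish a small marker first, then the big dataset.
function slowWrite(array &$cache, string $key, string $data): void
{
    $cache[$key] = BATCH_MARKER;   // quick write: "work in progress"
    // ... the long, heavy serialization/write would happen here ...
    $cache[$key] = $data;          // final write replaces the marker
}

// Reader side: if a marker is found, poll until the real data appears
// or the timeout is exceeded.
function readWithSlamProtection(array &$cache, string $key, int $timeout): ?string
{
    $deadline = time() + $timeout;
    while (($cache[$key] ?? null) === BATCH_MARKER) {
        if (time() >= $deadline) {
            return null; // gave up: a too-low timeout brings the slam risk back
        }
        usleep(1000); // wait a bit before re-checking
    }
    return $cache[$key] ?? null;
}

slowWrite($cache, 'test1', 'huge-dataset');
echo readWithSlamProtection($cache, 'test1', 20); // prints "huge-dataset"
```

In this single-process sketch the write finishes before the read, so no waiting actually happens; in real concurrent requests the reader would spin in that loop until request #1 replaces the marker with the dataset.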

The code is pretty easy to implement:

use phpFastCache\CacheManager;

$driverInstance = CacheManager::getInstance('Files', [
  'preventCacheSlams' => true, // enable the batch-entity protection
  'cacheSlamsTimeout' => 20    // how long concurrent requests will wait
]);

Now, here's a quick timeline to visualize the behavior (with cache slams protection) with only 2 concurrent requests:

Request #1: Connecting...
Request #1: Attempting to get the item "test1"
Request #1: The item "test1" does not exist
Request #1: Working to build the "test1" cache item with a heavy database/webservice operation.
Request #2: Connecting...
Request #2: Attempting to get the item "test1"
Request #1: Work done, attempting to write the huge dataset into the cache, this may take a while...
Request #1: + The cache writes a batch entity
Request #1: The cache starts writing the dataset
Request #2: The item "test1" is a batch entity, telling us to wait a bit...
Request #2: + Waiting....
Request #1: Finished writing the item "test1" into the cache
Request #2: + End of waiting (there's no batch entity anymore, but a dataset), fetching the dataset written by request #1
Request #1: Finalizes the request and serves the content
Request #1: Exiting...
Request #2: Finalizes the request and serves the content
Request #2: Exiting...

See what happened? The cache has not been overwritten, because request #2 waited for request #1 to finish the write operation. This added one extra quick write operation (the batch entity), but we saved a lot of resources for the following requests. However, this mechanism isn't risk-free: it makes all concurrent requests wait for request #1 to finish the write operation. For this reason you must be very careful with the cacheSlamsTimeout value:

  • Too high: this option can block your whole project for a while, especially if request #1 crashed for some reason right after writing the batch entity.
  • Too low: you may still face cache slams by hitting the cacheSlamsTimeout too quickly.

For file-based drivers you may also want to enable the secureFileManipulation option, which enforces a stricter (but safer) I/O manipulation policy.
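Combining the options for a file-based driver might look like this (a sketch based on the same V6-style call shown earlier on this page; check your version's configuration reference for the exact option names and units):

```php
use phpFastCache\CacheManager;

$driverInstance = CacheManager::getInstance('Files', [
  'preventCacheSlams'      => true, // batch-entity protection, as above
  'cacheSlamsTimeout'      => 20,   // wait time for concurrent requests
  'secureFileManipulation' => true  // stricter (but safer) file I/O policy
]);
```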