
Each YapDatabaseConnection has its own dedicated cache, which operates at the object level. This cache allows you to skip both disk IO and the overhead of deserialization.
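For example (a minimal sketch; the path, collection, and key are all hypothetical), reading the same object twice on one connection means the second read skips the disk entirely:

import YapDatabase

// Hypothetical path. Depending on your YapDatabase version,
// the initializer may be YapDatabase(url:) instead.
let database = YapDatabase(path: "/path/to/database.sqlite")
let connection = database.newConnection()

connection.read { transaction in
    // First read: disk IO + deserialization.
    let user = transaction.object(forKey: "user-123", inCollection: "users")

    // Second read of the same key: served from this connection's
    // objectCache. No disk IO, no deserialization.
    let cachedUser = transaction.object(forKey: "user-123", inCollection: "users")

    _ = (user, cachedUser)
}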

 

Flexibility

You have complete control over the cache at all times. From YapDatabaseConnection:

/// Each database connection maintains an independent cache of deserialized
/// objects. This reduces both disk IO and the overhead of the
/// deserialization process. The cache is properly kept in sync with the atomic
/// snapshot architecture of the database system.
/// 
/// You can optionally configure the cache size, or disable it completely.
/// By default the objectCache is enabled and has a limit of 250.
///
/// You can configure the objectCache at any time,
/// including within transactions.
/// 
/// To disable the object cache entirely, set objectCacheEnabled to false.
/// To use an infinite cache size, set the objectCacheLimit to zero.

var objectCacheEnabled: Bool { get set }
var objectCacheLimit: UInt { get set }

var metadataCacheEnabled: Bool { get set }
var metadataCacheLimit: UInt { get set }

You can manage the object cache and the metadata cache separately.

You can also configure the cache limits from within transactions. If you're about to do a bunch of processing that may involve looping over a large number of objects multiple times, you can temporarily increase the cache size, and then decrease it again when you're done, as in the sketch below.
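Here's a sketch of that pattern (the database instance is assumed to exist elsewhere, and the keys and limits are made up):

// Assumes `database` was created elsewhere.
let connection = database.newConnection()
let interestingKeys = ["item-1", "item-2", "item-3"] // hypothetical keys

// Temporarily raise the limit before a multi-pass job,
// so every object survives in the cache between passes.
let normalLimit = connection.objectCacheLimit
connection.objectCacheLimit = 1_000

connection.read { transaction in
    for _ in 0..<3 { // multiple passes over the same objects
        for key in interestingKeys {
            _ = transaction.object(forKey: key, inCollection: "items")
        }
    }
}

// Restore the previous limit when done.
connection.objectCacheLimit = normalLimit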

 

Concurrency

Each connection's cache is integrated deep into the architecture. Every transaction provides an atomic snapshot of the database, and the caches are automatically kept in sync with that snapshot.

For example, if you make changes to an object on connectionA, then those changes are automatically picked up by connectionB's cache as soon as connectionB moves to the latest commit/snapshot. In other words, everything just works.
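Here's a sketch of that scenario (the database, key, and value are hypothetical):

// Assumes `database` was created elsewhere.
let connectionA = database.newConnection()
let connectionB = database.newConnection()

// connectionB reads the object, which lands in its objectCache.
connectionB.read { transaction in
    _ = transaction.object(forKey: "user-123", inCollection: "users")
}

// connectionA commits a change to the same object.
connectionA.readWrite { transaction in
    transaction.setObject("updated user", forKey: "user-123", inCollection: "users")
}

// connectionB's next transaction runs against the new snapshot, and its
// cache has already been updated/flushed for the changed key. This read
// never returns the stale value.
connectionB.read { transaction in
    _ = transaction.object(forKey: "user-123", inCollection: "users")
}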

 

Performance

We weren't satisfied with the performance of NSCache. We knew we could make something faster. And so we did. In fact, we've benchmarked our cache at up to 85% faster than NSCache on an iPhone 5. (The benchmark code is included in the project if you want to run it yourself.)

But that's not all we improved upon. One of the things we didn't like about NSCache was its memory consumption. Although one can configure the cache with a countLimit, it doesn't strictly enforce it. NSCache has "various auto-removal policies", which means it evicts items when it darn-well feels like it, regardless of how you configure it. This is concerning from a memory management and memory footprint perspective. That's why our built-in cache strictly obeys the limits you set. And furthermore, it tracks the order in which objects are accessed, so it always evicts the least-recently used object.
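To make those semantics concrete, here's a toy strict-LRU cache. It's illustrative only (it is not YapDatabase's actual cache implementation, and a real one would use a doubly-linked list rather than an array): the count limit is never exceeded, and the evicted entry is always the least-recently accessed one.

final class ToyLRUCache<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private var order: [Key] = [] // least-recently used first
    let countLimit: Int

    init(countLimit: Int) {
        self.countLimit = countLimit
    }

    subscript(key: Key) -> Value? {
        get {
            guard let value = storage[key] else { return nil }
            touch(key) // reads count as "use" for eviction ordering
            return value
        }
        set {
            guard let newValue = newValue else {
                storage[key] = nil
                order.removeAll { $0 == key }
                return
            }
            storage[key] = newValue
            touch(key)
            // Strictly enforce the limit: evict the LRU entry immediately.
            if storage.count > countLimit, let lru = order.first {
                order.removeFirst()
                storage[lru] = nil
            }
        }
    }

    private func touch(_ key: Key) {
        order.removeAll { $0 == key } // O(n); fine for a toy
        order.append(key)
    }
}

let cache = ToyLRUCache<String, Int>(countLimit: 2)
cache["a"] = 1
cache["b"] = 2
_ = cache["a"]   // "a" is now the most-recently used
cache["c"] = 3   // strictly enforces the limit: evicts "b" (LRU)
assert(cache["b"] == nil && cache["a"] == 1 && cache["c"] == 3)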

Long story short: better performance and a predictable memory footprint.