
Faster Payments with In Memory Cache

Vedant Khairnar edited this page Jun 29, 2023 · 1 revision

By Karthikey Hegde & Abhishek Marrivagu

Paying with cache

In a transaction system such as payments, the main contributor to increased latency and occasional downtime is database calls. It is therefore crucial to minimize these calls as much as possible. A common approach is caching with Redis, but that involves serializing data and sending it over the network on writes, then fetching it back over the network and deserializing it on reads, which introduces latency of its own. An in-memory cache, by contrast, can significantly enhance the system's performance.

How do we implement this? (Rust)

A common way to implement this is with a global static hashmap in the binary, but the catch is that Rust's type system doesn't allow you to insert values of different types into the same hashmap.

The natural workaround is to serialize the data into a string before inserting it into the hashmap, then deserialize it back into the respective type while fetching. While this is a feasible solution, serialization and deserialization are CPU-intensive.

To bend the rules and store any type, we need a generic interface. That's where the Any trait from the standard library comes in: it lets us type-erase a value and later downcast it back into its concrete type. Since a downcast only involves comparing type_ids (plus a heap allocation when the value is stored), it's not a CPU-intensive task. We saw a performance improvement of almost 5x on sample data: the ser-de kind of cache showed a quadratic increase in time with data size, while this method maintained a linear increase.

Here are the Cargo benchmarks for both methods.



How does it work?

A generic Cacheable trait and a Cache struct

To achieve this, we define a generic trait called Cacheable. Implementing this trait ensures that everything in the cache is thread-safe (bounded by the Send and Sync traits) and that the data inside it can be copied out (using the Clone trait). This approach gives us the flexibility to store whatever types we see fit.

Getting the cache

When reading from the cache, we downcast the value, which casts the heap-allocated value back to the requested type. This process is fast and not CPU-heavy.

The Any trait with downcast gives you a safe and fast implementation of type-agnostic behavior.

Cache invalidation in a multi-pod Kubernetes system


The cache should always stay in sync with database changes, and only frequently fetched data should live in the cache. In a multi-pod Kubernetes environment, however, invalidating a key means reaching every pod's local cache. We used the Redis PubSub channel to do this.

So every pod where the application is deployed subscribes to a common invalidation channel. Whenever a value is updated, we publish its key through the PubSub interface; each subscribed pod receives the key and invalidates it in its local cache.

In conclusion, a cache plays an important role in keeping a transactional system fast and robust. A well-designed cache requires careful consideration of invalidation, consistency with the database, and more. This was just our own implementation, which is helping keep our system robust. We'd love to hear how you'd go about it.