Caching is a powerful tool for improving the performance of your service. Done properly, it can significantly reduce the load on your underlying databases. There are several common patterns, such as Read Through Cache and Write Through Cache, each with its own benefits and pitfalls to keep in mind.
1. Read Through Cache: Read-through caching is the simplest caching pattern. Data is always served by the cache: if the data is not present in the cache, the cache itself fetches it from the underlying database and then serves it to the requester.
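A minimal Python sketch of this flow, assuming a hypothetical `ReadThroughCache` wrapper whose `loader` function stands in for the database query (the names are illustrative, not a real library API):

```python
# Read-through sketch: the cache owns the loader, so callers never
# talk to the database directly. `loader` is a hypothetical function
# that fetches a value for a key from the backing database.
class ReadThroughCache:
    def __init__(self, loader):
        self.loader = loader   # e.g. a function wrapping a database query
        self.store = {}        # simple in-memory key/value store

    def get(self, key):
        if key not in self.store:                # cache miss
            self.store[key] = self.loader(key)   # the cache itself hits the database
        return self.store[key]                   # always served from the cache
```

The important design point is that the client only ever calls `cache.get(...)`; the database access is hidden inside the cache layer.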

2. Read Aside Caching: Read-aside caching (also known as cache-aside) is similar to the read-through pattern. The key difference is that the client tries to get the data from the cache first; on a cache miss, the client queries the underlying database itself and loads the result into the cache.
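In code, the read-aside flow might look like the following sketch, where `cache` is any dict-like store and `fetch_from_db` is a hypothetical database query function supplied by the application:

```python
# Read-aside (cache-aside) sketch: the client, not the cache,
# is responsible for fetching from the database on a miss.
def get_user(user_id, cache, fetch_from_db):
    user = cache.get(user_id)          # 1. try the cache first
    if user is None:                   # 2. cache miss
        user = fetch_from_db(user_id)  # 3. client queries the database itself
        cache[user_id] = user          # 4. client populates the cache
    return user
```

Compared with read-through, the caching logic lives in the client, which gives you more control (e.g. over what gets cached) at the cost of repeating that logic at every call site.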

Both of these patterns are simple to implement and can help your application scale and respond faster. However, there are a few things to keep in mind when using them:
- Cold Start problem: When your application starts up, or while the cache is being scaled up, there is no data in the cache, so every request is served by the database. This can cause performance issues early on. To mitigate it, you can run a warm-up script that loads data into the cache before serving any requests.
- Thundering Herd problem: When many clients request the same resource simultaneously and none of them finds it in the cache, multiple identical requests can be sent to the database for that resource. The result is unnecessary load on the database and timeouts on the client end. This can be avoided by taking a lock on the cache side, so that only one request is sent to the database.
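The locking idea above can be sketched as follows. `HerdSafeCache` and its loader are hypothetical names, and a single global lock is used for simplicity; a real implementation would typically lock per key to avoid serializing unrelated misses:

```python
import threading

# Thundering-herd mitigation sketch: on a miss, only one thread
# is allowed to load the value; the others wait and then reuse it.
class HerdSafeCache:
    def __init__(self, loader):
        self.loader = loader        # hypothetical database query function
        self.store = {}
        self.lock = threading.Lock()

    def get(self, key):
        if key in self.store:       # fast path: cache hit, no locking
            return self.store[key]
        with self.lock:             # only one thread proceeds on a miss
            if key not in self.store:  # re-check: another thread may have loaded it
                self.store[key] = self.loader(key)
        return self.store[key]
```

The double check inside the lock is what guarantees the database sees a single request even when many threads miss at the same moment.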
3. Write Through Cache: In write-through caching, data is written to the cache and the database at the same time, and confirmation is sent to the client only after the write has succeeded in both places. This pattern works well for applications where data is updated infrequently but read very frequently.
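A minimal write-through sketch, assuming a hypothetical backing store object with a `write` method (the names are illustrative):

```python
# Write-through sketch: the client is acknowledged only after
# both the database and the cache have been updated.
class WriteThroughCache:
    def __init__(self, db):
        self.db = db      # hypothetical backing store with a `write` method
        self.store = {}   # in-memory cache

    def put(self, key, value):
        self.db.write(key, value)   # 1. write to the database (raises on failure)
        self.store[key] = value     # 2. write to the cache
        return True                 # 3. acknowledge only after both succeed
```

Because every write pays the database round trip, reads after a write are always fresh, which is why this suits read-heavy workloads.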

4. Write Back Cache: In write-back caching, data is written to the cache only, and confirmation is sent to the client immediately; a background asynchronous process then persists the data to the backing store. This pattern provides low latency and high throughput for write-intensive applications. However, be careful: data can be lost if the cache crashes before it has been written to the backing store.
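One simple way to sketch write-back is a queue drained by a background thread; `WriteBackCache` and the `db.write` method are hypothetical names for illustration:

```python
import queue
import threading

# Write-back sketch: writes are acknowledged as soon as the cache
# is updated; a background thread persists them asynchronously.
# Note the data-loss risk: queued writes vanish if the process dies.
class WriteBackCache:
    def __init__(self, db):
        self.db = db                   # hypothetical backing store with `write`
        self.store = {}
        self.pending = queue.Queue()   # writes awaiting persistence
        threading.Thread(target=self._flush, daemon=True).start()

    def put(self, key, value):
        self.store[key] = value          # 1. write to the cache only
        self.pending.put((key, value))   # 2. queue for async persistence
        return True                      # acknowledged before the database write

    def _flush(self):
        while True:
            key, value = self.pending.get()
            self.db.write(key, value)    # background write to the backing store
            self.pending.task_done()
```

A production system would batch these flushes and add retry handling, but the shape is the same: acknowledge first, persist later.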

These patterns can also be used in conjunction with each other to get the benefits of both. For example, write-through caching can be combined with read-through caching to mitigate the cold start problem and increase the cache hit rate.