
Caching Patterns: pros and cons

Caching is a very useful tool for increasing the performance of your service. Done properly, caching can significantly reduce the load on your underlying databases. There are several different patterns, like Read Through Cache and Write Through Cache, and each of them has its own benefits and pitfalls that need to be kept in mind.

  1. Read Through Cache: Read-through caching is the simplest caching pattern. In read-through caching, data is always served by the cache. If the data is not present in the cache, the cache fetches it from the underlying database and then serves it to the requester.
[Figure: Read Through Caching]
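To make the flow concrete, here is a minimal read-through sketch in Go. The ReadThroughCache type, its loadDB field, and the TTL handling are hypothetical illustrations rather than any particular library's API; the point is that the caller only ever talks to the cache, and the cache itself falls back to the database on a miss.

package cache

import (
    "sync"
    "time"
)

type entry struct {
    value     string
    expiresAt time.Time
}

// ReadThroughCache sits between the caller and the database:
// callers never query the database directly.
type ReadThroughCache struct {
    mu     sync.Mutex
    items  map[string]entry
    ttl    time.Duration
    loadDB func(key string) (string, error) // hypothetical database loader
}

func NewReadThroughCache(ttl time.Duration, load func(string) (string, error)) *ReadThroughCache {
    return &ReadThroughCache{items: make(map[string]entry), ttl: ttl, loadDB: load}
}

func (c *ReadThroughCache) Get(key string) (string, error) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if e, ok := c.items[key]; ok && time.Now().Before(e.expiresAt) {
        return e.value, nil // cache hit
    }
    // Cache miss: the cache itself loads from the database...
    v, err := c.loadDB(key)
    if err != nil {
        return "", err
    }
    // ...and stores the result before serving it to the requester.
    c.items[key] = entry{value: v, expiresAt: time.Now().Add(c.ttl)}
    return v, nil
}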
  2. Read Aside Caching: Read-aside caching (also known as cache-aside) is similar to Read Through Cache. The key difference is that the client tries to get the data from the cache first. If it's a cache miss, the client queries the underlying database itself and loads the data into the cache.
[Figure: Read Aside Caching]
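Here is a comparable read-aside sketch in Go, with the miss handling on the client side this time. The cacheGet, cachePut, and dbGet helpers (and the toy in-memory maps behind them) are made-up stand-ins for your actual cache and database clients.

package main

import (
    "errors"
    "fmt"
    "sync"
)

var (
    mu    sync.Mutex
    cache = map[string]string{}
    db    = map[string]string{"42": "alice"} // toy stand-in for the database
)

func cacheGet(id string) (string, bool) {
    mu.Lock()
    defer mu.Unlock()
    v, ok := cache[id]
    return v, ok
}

func cachePut(id, v string) {
    mu.Lock()
    defer mu.Unlock()
    cache[id] = v
}

func dbGet(id string) (string, error) {
    if v, ok := db[id]; ok {
        return v, nil
    }
    return "", errors.New("not found")
}

// getUser shows the read-aside flow: the client, not the cache,
// handles the miss.
func getUser(id string) (string, error) {
    if v, ok := cacheGet(id); ok {
        return v, nil // cache hit
    }
    v, err := dbGet(id) // miss: the client queries the database itself
    if err != nil {
        return "", err
    }
    cachePut(id, v) // and loads the result into the cache
    return v, nil
}

func main() {
    fmt.Println(getUser("42")) // first call misses and fills the cache
    fmt.Println(getUser("42")) // second call is a cache hit
}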
  Both of these are the simplest patterns to implement, and they can help your application scale and respond faster. However, there are a few things that need to be kept in mind while using these patterns:
  • Cold Start problem: Initially, when your application starts up or the cache is being scaled up, there is no data in the cache, so all requests are served by the database. This can cause performance issues in the beginning. To mitigate this, you can run a warm-up script that loads data into the cache before you start serving requests.
  • Thundering Herd problem: When many clients request the same resource simultaneously and don't find it in the cache, multiple requests for that resource can be sent to the database. The result is unnecessary load on the database and timeouts on the client end. This can be avoided by taking a lock on the cache side, so that only one request is sent to the database; see the singleflight sketch after this list.
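For the thundering herd case, Go's golang.org/x/sync/singleflight package implements exactly this kind of per-key lock: concurrent callers asking for the same key are collapsed into a single request. A small sketch, where loadFromDB is a hypothetical (and deliberately slow) database call:

package main

import (
    "fmt"
    "sync"
    "time"

    "golang.org/x/sync/singleflight"
)

var group singleflight.Group

// loadFromDB is a hypothetical database call, slowed down so the
// goroutines below overlap.
func loadFromDB(key string) (string, error) {
    time.Sleep(100 * time.Millisecond)
    return "value-for-" + key, nil
}

func getWithSingleflight(key string) (string, error) {
    // Do collapses concurrent calls for the same key: only the first
    // caller runs the function, the rest wait and share its result.
    v, err, _ := group.Do(key, func() (interface{}, error) {
        return loadFromDB(key)
    })
    if err != nil {
        return "", err
    }
    return v.(string), nil
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            v, _ := getWithSingleflight("hot-key")
            fmt.Println(v)
        }()
    }
    wg.Wait() // ten concurrent readers, but loadFromDB runs only once
}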
  3. Write Through Cache: In write-through caching, data is stored in the cache and the database at the same time. Confirmation is sent to the client only after the write has succeeded in both places. This pattern works well for applications where data is updated infrequently but read very frequently.
[Figure: Write Through Caching]
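A hedged write-through sketch in Go, again with a hypothetical writeDB helper standing in for the real database client. The function reports success only after both the database and the cache have accepted the write:

package main

import (
    "fmt"
    "sync"
)

var (
    mu      sync.Mutex
    cache   = map[string]string{}
    dbTable = map[string]string{} // toy stand-in for the database
)

// writeDB is a hypothetical synchronous database write.
func writeDB(key, value string) error {
    dbTable[key] = value
    return nil
}

// writeThrough writes to the database and the cache, and confirms
// to the caller only once both writes have succeeded.
func writeThrough(key, value string) error {
    mu.Lock()
    defer mu.Unlock()
    if err := writeDB(key, value); err != nil {
        return err // never cache a value the database rejected
    }
    cache[key] = value
    return nil
}

func main() {
    if err := writeThrough("user:42", "alice"); err != nil {
        fmt.Println("write failed:", err)
        return
    }
    fmt.Println("cached:", cache["user:42"], "stored:", dbTable["user:42"])
}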
  4. Write Back Cache: In write-back caching, data is stored in the cache only, and confirmation is sent to the client right away. The data is written to the backing store later by an asynchronous background process. This pattern is useful for providing low latency and high throughput for write-intensive applications. However, you have to be careful: if your cache crashes before the data is written to the backing store, that data is lost.
[Figure: Write Back Caching]
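A minimal write-back sketch, assuming a buffered Go channel as the async queue and a hypothetical writeDB flusher. The gap between acknowledging the client and draining the queue is exactly where a cache crash loses data:

package main

import (
    "fmt"
    "sync"
    "time"
)

type write struct{ key, value string }

var (
    mu    sync.Mutex
    cache = map[string]string{}
    queue = make(chan write, 1024) // pending writes not yet in the database
)

// writeDB is a hypothetical database write performed by the background flusher.
func writeDB(w write) {
    fmt.Println("flushed to db:", w.key, "=", w.value)
}

// writeBack stores the value in the cache only and acknowledges
// immediately; the database write happens later, in the background.
func writeBack(key, value string) {
    mu.Lock()
    cache[key] = value
    mu.Unlock()
    queue <- write{key, value} // anything still queued is lost if the cache crashes
}

// flusher drains the queue asynchronously.
func flusher() {
    for w := range queue {
        writeDB(w)
    }
}

func main() {
    go flusher()
    writeBack("user:42", "alice")     // returns before the database write happens
    time.Sleep(50 * time.Millisecond) // give the flusher a moment (sketch only)
}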
  We can also use these patterns in conjunction with each other to get the benefits of both. For example, write-through caching can be combined with read-through caching to mitigate the cold start problem as well as increase the cache hit rate.
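A rough sketch of that combination, with a hypothetical Store interface standing in for the database: Get is read-through (misses backfill the cache) and Put is write-through (every write also warms the cache), which together soften the cold start and raise the hit rate.

package cache

import "sync"

// Store is a hypothetical database interface.
type Store interface {
    Load(key string) (string, error)
    Save(key, value string) error
}

// Combined pairs a read-through Get with a write-through Put.
type Combined struct {
    mu    sync.Mutex
    items map[string]string
    db    Store
}

func NewCombined(db Store) *Combined {
    return &Combined{items: make(map[string]string), db: db}
}

func (c *Combined) Get(key string) (string, error) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if v, ok := c.items[key]; ok {
        return v, nil
    }
    v, err := c.db.Load(key) // read-through on a miss
    if err != nil {
        return "", err
    }
    c.items[key] = v
    return v, nil
}

func (c *Combined) Put(key, value string) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    if err := c.db.Save(key, value); err != nil {
        return err
    }
    c.items[key] = value // write-through keeps the cache warm
    return nil
}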
