Monitoring Go microservices using Prometheus

In this age of web-scale architecture, Golang has become the language of choice for many developers implementing high-throughput microservices. A key part of running and maintaining these services is being able to measure their performance. Prometheus is a time-series-based monitoring system that has become very popular for monitoring microservices. In this blog, we will see how to implement monitoring for your Go microservice using Prometheus.

We will be using the official Prometheus client library, github.com/prometheus/client_golang, to expose the Go metrics. Its promhttp package (github.com/prometheus/client_golang/prometheus/promhttp) provides a ready-made HTTP handler that you can register to serve the metrics endpoint.

package main

import (
   "net/http"

   "github.com/gorilla/mux"
   "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
   router := mux.NewRouter()
   // promhttp.Handler() serves every metric registered with the default
   // registry in the Prometheus text exposition format.
   router.Handle("/metrics", promhttp.Handler())
   http.ListenAndServe(":8080", router)
}

This minimal example will expose the default application metrics at localhost:8080/metrics. Even with this bare-bones code, we get some very useful runtime metrics, covering threads, goroutines, memory usage and GC behaviour.
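
For example, hitting the endpoint returns lines like these from the Go client's default collectors (the values shown here are illustrative; yours will differ):

# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 7
go_threads 10
go_memstats_alloc_bytes 1.404816e+06
go_gc_duration_seconds{quantile="0.5"} 3.5e-05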

The next step in performance monitoring is to instrument custom metrics from your application. Some of the most important custom metrics to monitor are the response times and error rates of your API endpoints. We will write a middleware which automatically instruments all of our application endpoints for us. We will use Prometheus Summary objects to track response times, since we are interested not just in averages but also in higher percentiles (such as the 90th and 99th) of our application's response times.

package middleware

import (
   "log"
   "net/http"
   "time"

   "github.com/prometheus/client_golang/prometheus"
)

// PrometheusMiddleware holds the Summary that records response times for one route.
type PrometheusMiddleware struct {
   Summary prometheus.Summary
}

func NewPrometheusMiddleware(summary prometheus.Summary) *PrometheusMiddleware {
   return &PrometheusMiddleware{Summary: summary}
}

// PromeMiddleware wraps a handler and observes how long each request took to serve.
func (m *PrometheusMiddleware) PromeMiddleware(next http.Handler) http.Handler {
   return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      start := time.Now()
      log.Println(r.RequestURI)
      // Call the next handler, which can be another middleware in the chain, or the final handler.
      next.ServeHTTP(w, r)
      // Seconds() already returns a float64, so no conversion is needed.
      m.Summary.Observe(time.Since(start).Seconds())
   })
}

We pass in a slice containing the names of the routes we want to monitor, and the InitMiddleWares function returns a map from each route name to its PrometheusMiddleware:

package metrics

import (
   "prometheus-blog/middleware"

   "github.com/prometheus/client_golang/prometheus"
   "github.com/prometheus/client_golang/prometheus/promauto"
)

// InitMiddleWares creates one Summary per route; promauto registers each
// Summary with the default registry so promhttp can expose it.
func InitMiddleWares(routes []string) map[string]*middleware.PrometheusMiddleware {
   routeMiddlewareMap := map[string]*middleware.PrometheusMiddleware{}
   for _, route := range routes {
      routeMiddlewareMap[route] = &middleware.PrometheusMiddleware{
         Summary: promauto.NewSummary(prometheus.SummaryOpts{
            Name:      route,
            Namespace: "prometheus_blog",
         }),
      }
   }
   return routeMiddlewareMap
}
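
One caveat: depending on your client_golang version, a Summary created without explicit Objectives may not report any quantiles at all. If the quantile lines shown in the output below are missing for you, set Objectives in the SummaryOpts above (the error margins here are illustrative choices, not values from this post):

Summary: promauto.NewSummary(prometheus.SummaryOpts{
   Name:      route,
   Namespace: "prometheus_blog",
   // Target quantiles mapped to their allowed absolute error.
   Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
}),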

The updated server.go file looks like this:

package main

import (
   "fmt"
   "net/http"

   "github.com/gorilla/mux"
   "github.com/prometheus/client_golang/prometheus/promhttp"

   "prometheus-blog/metrics"
   "prometheus-blog/middleware"
)

var middlewareMap map[string]*middleware.PrometheusMiddleware

func main() {
   routes := []string{"exampleEndpoint"}
   middlewareMap = metrics.InitMiddleWares(routes)
   router := mux.NewRouter()
   router.Handle("/metrics", promhttp.Handler())
   // Wrap the example endpoint in its middleware so every request is timed.
   router.Handle("/api/v1/prometheus/example",
      middlewareMap["exampleEndpoint"].PromeMiddleware(GetExampleHandler())).Methods("GET")
   err := http.ListenAndServe(":8080", router)
   if err != nil {
      fmt.Println(err)
   }
}

func GetExampleHandler() http.Handler {
   return http.HandlerFunc(exampleHandler)
}

func exampleHandler(w http.ResponseWriter, r *http.Request) {
   w.Write([]byte("test data"))
}

Here we are instrumenting the response times of our example endpoint /api/v1/prometheus/example. Hitting localhost:8080/metrics should now show the custom response-time metrics:

# HELP prometheus_blog_exampleEndpoint
# TYPE prometheus_blog_exampleEndpoint summary
prometheus_blog_exampleEndpoint{quantile="0.5"} 1.321e-05
prometheus_blog_exampleEndpoint{quantile="0.9"} 0.000148643
prometheus_blog_exampleEndpoint{quantile="0.99"} 0.000148643
prometheus_blog_exampleEndpoint_sum 0.00018721
prometheus_blog_exampleEndpoint_count 4
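
The _sum and _count series are useful beyond the quantiles: dividing the rate of one by the other gives the average latency over a window. In Prometheus, a query along the lines of rate(prometheus_blog_exampleEndpoint_sum[5m]) / rate(prometheus_blog_exampleEndpoint_count[5m]) would do it (an illustrative query, not from this post).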

In this tutorial, we saw how to monitor your Go application using Prometheus, with some basic examples. You can extend the concepts shown here to cover more of your monitoring needs, such as error rates or the response times of individual functions.
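
As a sketch of one such extension, an error-rate middleware can wrap the ResponseWriter to capture the status code and count server errors per route. The statusRecorder type and the errors_total metric name below are illustrative choices of mine, not part of this post's code:

package middleware

import (
   "net/http"

   "github.com/prometheus/client_golang/prometheus"
   "github.com/prometheus/client_golang/prometheus/promauto"
)

// statusRecorder wraps http.ResponseWriter so the middleware can see
// the status code written by the downstream handler.
type statusRecorder struct {
   http.ResponseWriter
   status int
}

func (rec *statusRecorder) WriteHeader(code int) {
   rec.status = code
   rec.ResponseWriter.WriteHeader(code)
}

// errorCounter counts 5xx responses, labelled by route.
var errorCounter = promauto.NewCounterVec(prometheus.CounterOpts{
   Namespace: "prometheus_blog",
   Name:      "errors_total",
   Help:      "Number of 5xx responses served, per route.",
}, []string{"route"})

// ErrorCountMiddleware increments the counter whenever the wrapped
// handler responds with a server error.
func ErrorCountMiddleware(route string, next http.Handler) http.Handler {
   return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
      next.ServeHTTP(rec, r)
      if rec.status >= 500 {
         errorCounter.WithLabelValues(route).Inc()
      }
   })
}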
