
Monitoring Spring Boot API response times with Prometheus

Prometheus is a very useful tool for monitoring web applications. In this blog post, we will see how to use it to monitor Spring Boot API response times. First, include the following dependencies in your build.gradle file:
compile group: 'io.prometheus', name: 'simpleclient_hotspot', version: '0.0.26'
compile group: 'io.prometheus', name: 'simpleclient_servlet', version: '0.0.26'
compile group: 'io.prometheus', name: 'simpleclient', version: '0.0.26'
Next, you have to expose a REST endpoint so that Prometheus can collect the metrics by scraping it at regular intervals. To do that, add the following Java configuration classes:
import io.prometheus.client.Collector;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.MetricsServlet;
import io.prometheus.client.hotspot.GarbageCollectorExports;
import io.prometheus.client.hotspot.MemoryPoolsExports;
import io.prometheus.client.hotspot.StandardExports;
import io.prometheus.client.hotspot.ThreadExports;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
// in Spring Boot 1.3 and earlier, ServletRegistrationBean lives in org.springframework.boot.context.embedded
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.ArrayList;
import java.util.List;

@Configuration
@ConditionalOnClass(CollectorRegistry.class)
public class Config {

    private static final CollectorRegistry metricRegistry = CollectorRegistry.defaultRegistry;

    // expose all metrics from the default registry at /metrics
    @Bean
    ServletRegistrationBean registerPrometheusExporterServlet() {
        return new ServletRegistrationBean(new MetricsServlet(metricRegistry), "/metrics");
    }

    // register the standard JVM exporters (CPU, memory pools, GC, threads)
    @Bean
    ExporterRegister exporterRegister() {
        List<Collector> collectors = new ArrayList<>();
        collectors.add(new StandardExports());
        collectors.add(new MemoryPoolsExports());
        collectors.add(new GarbageCollectorExports());
        collectors.add(new ThreadExports());
        return new ExporterRegister(collectors);
    }
}
The ExporterRegister class is a small helper that registers each collector with the default registry:
import io.prometheus.client.Collector;

import java.util.List;

public class ExporterRegister {

    private List<Collector> collectors;

    // register each collector with the default registry and keep a reference to them
    public ExporterRegister(List<Collector> collectors) {
        for (Collector collector : collectors) {
            collector.register();
        }
        this.collectors = collectors;
    }

    public List<Collector> getCollectors() {
        return collectors;
    }
}
Once you have completed these steps, you should be able to see JVM-related metrics by hitting the /metrics endpoint of your application. The output should look like this:
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 75.136873
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.519458209374E9
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 50.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 10240.0
# HELP http_response_time_milliseconds Request completed time in milliseconds
# TYPE http_response_time_milliseconds summary
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="PS Scavenge",} 25.0
jvm_gc_collection_seconds_sum{gc="PS Scavenge",} 0.389
jvm_gc_collection_seconds_count{gc="PS MarkSweep",} 3.0
jvm_gc_collection_seconds_sum{gc="PS MarkSweep",} 0.326
# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 21.0
# HELP jvm_threads_daemon Daemon thread count of a JVM
# TYPE jvm_threads_daemon gauge
jvm_threads_daemon 13.0
# HELP jvm_threads_peak Peak thread count of a JVM
# TYPE jvm_threads_peak gauge
jvm_threads_peak 21.0
# HELP jvm_threads_started_total Started thread count of a JVM
# TYPE jvm_threads_started_total counter
jvm_threads_started_total 27.0
# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
# TYPE jvm_threads_deadlocked gauge
jvm_threads_deadlocked 0.0
# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
# TYPE jvm_threads_deadlocked_monitor gauge
jvm_threads_deadlocked_monitor 0.0
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 4.59626208E8
jvm_memory_bytes_used{area="nonheap",} 1.06719768E8
# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_committed gauge
jvm_memory_bytes_committed{area="heap",} 1.516765184E9
jvm_memory_bytes_committed{area="nonheap",} 1.09379584E8
# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
# TYPE jvm_memory_bytes_max gauge
jvm_memory_bytes_max{area="heap",} 3.817865216E9
jvm_memory_bytes_max{area="nonheap",} -1.0
# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_used gauge
jvm_memory_pool_bytes_used{pool="Code Cache",} 2.6457856E7
jvm_memory_pool_bytes_used{pool="Metaspace",} 7.091852E7
jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 9343392.0
jvm_memory_pool_bytes_used{pool="PS Eden Space",} 3.06978872E8
jvm_memory_pool_bytes_used{pool="PS Survivor Space",} 6.2361944E7
jvm_memory_pool_bytes_used{pool="PS Old Gen",} 9.0285392E7
# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_committed gauge
jvm_memory_pool_bytes_committed{pool="Code Cache",} 2.7590656E7
jvm_memory_pool_bytes_committed{pool="Metaspace",} 7.2220672E7
jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 9568256.0
jvm_memory_pool_bytes_committed{pool="PS Eden Space",} 1.22159104E9
jvm_memory_pool_bytes_committed{pool="PS Survivor Space",} 6.2390272E7
jvm_memory_pool_bytes_committed{pool="PS Old Gen",} 2.32783872E8
# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
# TYPE jvm_memory_pool_bytes_max gauge
jvm_memory_pool_bytes_max{pool="Code Cache",} 2.5165824E8
jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
jvm_memory_pool_bytes_max{pool="PS Eden Space",} 1.29236992E9
jvm_memory_pool_bytes_max{pool="PS Survivor Space",} 6.2390272E7
jvm_memory_pool_bytes_max{pool="PS Old Gen",} 2.863661056E9
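At this point Prometheus can be pointed at the endpoint. The scrape configuration below is a minimal sketch; the job name, target host and port, and scrape interval are assumptions, so adjust them to wherever your application actually runs:
scrape_configs:
  - job_name: 'spring-boot-app'      # hypothetical job name
    scrape_interval: 15s
    metrics_path: '/metrics'
    static_configs:
      - targets: ['localhost:8080']  # assumed host:port of the Spring Boot application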
Now, to monitor API response times, you have to write an interceptor that gathers and updates the metrics for each call to a particular API endpoint. You can specify the different quantiles to be monitored, along with an error tolerance for each of them.
import io.prometheus.client.Summary;
import org.springframework.stereotype.Component;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.lang.reflect.Method;

@Component
public class PrometheusRequestTimerInterceptor extends HandlerInterceptorAdapter {

    private static final String REQ_PARAM_TIMING = "timing";

    // summary tracking p50/p95/p99 with a 5% tolerated error, over a 2-minute sliding window
    private static final Summary responseTimeInMs = Summary
            .build()
            .name("http_response_time_milliseconds")
            .labelNames("method", "handler", "status")
            .help("Request completed time in milliseconds")
            .maxAgeSeconds(120)
            .ageBuckets(2)
            .quantile(0.5, 0.05)
            .quantile(0.95, 0.05)
            .quantile(0.99, 0.05)
            .register();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        // remember the request start time on the request itself
        request.setAttribute(REQ_PARAM_TIMING, System.currentTimeMillis());
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex)
            throws Exception {
        Long timingAttr = (Long) request.getAttribute(REQ_PARAM_TIMING);
        long requestTime = System.currentTimeMillis() - timingAttr;
        String handlerLabel = handler.toString();
        // get short form of handler method name
        if (handler instanceof HandlerMethod) {
            Method method = ((HandlerMethod) handler).getMethod();
            handlerLabel = method.getDeclaringClass().getSimpleName() + "." + method.getName();
        }
        // record the elapsed time against the method, handler and status labels
        responseTimeInMs.labels(request.getMethod(), handlerLabel, Integer.toString(response.getStatus())).observe(
                requestTime);
    }
}
Once you have implemented your interceptor, you have to register it so that it can intercept all the API calls made to the application:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
// on Spring 5+, you can implement WebMvcConfigurer directly instead of extending the deprecated WebMvcConfigurerAdapter
public class InterceptorConfig extends WebMvcConfigurerAdapter {

    @Autowired
    private PrometheusRequestTimerInterceptor prometheusRequestTimerInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(prometheusRequestTimerInterceptor);
    }
}
With this in place, you should be able to see response-time metrics for your API endpoints by hitting the /metrics endpoint:
http_response_time_milliseconds{method="GET",handler="ConfigController.YourController",status="200",quantile="0.5",} 123.0
http_response_time_milliseconds{method="GET",handler="ConfigController.YourController",status="200",quantile="0.95",} 133.0
http_response_time_milliseconds{method="GET",handler="ConfigController.YourController",status="200",quantile="0.99",} 133.0
http_response_time_milliseconds_count{method="GET",handler="ConfigController.YourController",status="200",} 100.0
http_response_time_milliseconds_sum{method="GET",handler="ConfigController.YourController",status="200",} 10123.0
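The precomputed quantile series can be graphed directly in Prometheus, and the _sum and _count series can be combined to derive averages. As a rough sketch, using the metric and label names from above:
# 95th percentile response time per method/handler/status
http_response_time_milliseconds{quantile="0.95"}

# average response time over the last 5 minutes
rate(http_response_time_milliseconds_sum[5m]) / rate(http_response_time_milliseconds_count[5m])
Note that summary quantiles are computed client-side and cannot be meaningfully aggregated across several application instances; if you need that, a Histogram with histogram_quantile() is the usual alternative.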
I write on various programming topics here.
