JVM monitoring of a SpringBoot project on k8s based on Grafana + Prometheus

Preface

A SpringBoot application without JVM monitoring is like a wild horse: it may run well today and bolt tomorrow, and we, its keepers, have no way to rein it in. For an application, that loss of control can be fatal.

When a SpringBoot project is migrated to k8s, how do we monitor its resource usage (CPU, memory, disk) and JVM internals (heap memory, threads, class loading, and so on)? Compared with deploying directly on a server, the original monitoring approach no longer applies in a k8s environment: the application is no longer tied to a designated server and may well run as a cluster of pods.

So what do we do? Monitor the pods one by one?


At this point I remembered that when we built the k8s cluster itself, we already used Grafana + Prometheus to monitor all the objects in k8s.

Grafana is an open source data visualization tool written in Go; it supports data monitoring and statistics and has alerting built in.

Prometheus is an open source monitoring system originally developed at SoundCloud. In 2016 it was accepted by the Cloud Native Computing Foundation (CNCF), founded under the Linux Foundation with Google's backing, as its second hosted project (after Kubernetes), so it has strong backing.

The figure below shows resource monitoring of k8s nodes, pods, and services based on Grafana + Prometheus. This already covers the hardware-resource side of monitoring our SpringBoot project.

(screenshot: Grafana dashboard for k8s node, pod, and service resource monitoring)

The dashboard JSON and its setup are covered in an earlier post: Implementing k8s resource monitoring based on Grafana + Prometheus.

Back to the topic: how do we implement JVM monitoring for a SpringBoot project on k8s? For an ordinary Java project we usually use the JDK's own monitoring tools, jconsole and jvisualvm, or command-line tools such as jstat.


This shows that the JVM state of a running SpringBoot project has a readable entry point. So can we expose that JVM state data through an interface and, with Grafana + Prometheus as the carrier, visualize the JVM state of a SpringBoot application in k8s?

Of course we can. That is exactly why I am writing this post.
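Tools like jconsole read the JVM's MXBeans over JMX, and the same data is available programmatically from the standard library. This is exactly the kind of state that Actuator will later export for us. A minimal stdlib sketch (the class and property names here are illustrative, not part of any framework):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    public static void main(String[] args) {
        // Heap usage, as shown in jconsole's "Memory" tab
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();

        // Live thread count, as shown in the "Threads" tab
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();

        System.out.println("heap.used.bytes=" + heap.getUsed());
        System.out.println("heap.max.bytes=" + heap.getMax());
        System.out.println("threads.live=" + threadBean.getThreadCount());
        System.out.println("classes.loaded="
                + ManagementFactory.getClassLoadingMXBean().getLoadedClassCount());
    }
}
```

These are the same readings that Actuator and Micrometer collect and expose over HTTP for Prometheus to scrape.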


Principle

Spring-Boot-Actuator

SpringBoot ships with a monitoring module, Actuator, which exposes the internal state of a running application: health checks, bean loading, environment variables, log information, thread information, auditing, statistics, and HTTP tracing.
Actuator can also integrate with external application-monitoring systems such as Prometheus, and you can choose HTTP endpoints or JMX to manage and monitor the application.
Under the hood, Actuator uses Micrometer to integrate with such external monitoring systems, so hooking up almost any monitoring backend takes very little configuration.

See the official Spring-Boot-Actuator documentation for details.

Implementation principle

In the SpringBoot project, spring-boot-actuator exposes JVM and other state data over HTTP; Prometheus is configured to scrape that endpoint and store the data; finally, Grafana is configured with Prometheus as a data source and the relevant charts are designed, reading the stored state data back via PromQL and rendering it on the page. This gives us visual JVM monitoring of a SpringBoot application running in k8s.
(diagram: SpringBoot Actuator → Prometheus → Grafana data flow)

Implementation

Environment

| Software | Version |
| --- | --- |
| java | 1.8 |
| springboot | 2.1.0.RELEASE |
| Spring-Boot-Actuator | 2.1.0.RELEASE |
| grafana-amd64 | v5.0.4 |
| prometheus | v2.0.0 |
| k8s | 1.16.0 |

Add the dependencies to the SpringBoot project's pom.xml

        <!-- monitor -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
            <version>2.1.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
            <version>1.1.0</version>
        </dependency>

Application configuration (application.properties)

#  Actuator config
management.endpoints.web.exposure.include=*
management.metrics.tags.application=${spring.application.name}
management.metrics.export.prometheus.enabled=true
management.endpoint.health.show-details=always
management.endpoints.web.exposure.exclude=env,beans
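If the project uses application.yml instead of application.properties, the equivalent configuration (a direct translation of the properties above) would be:

```yaml
#  Actuator config
management:
  endpoints:
    web:
      exposure:
        include: "*"
        exclude: env,beans
  endpoint:
    health:
      show-details: always
  metrics:
    tags:
      application: ${spring.application.name}
    export:
      prometheus:
        enabled: true
```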

Viewing the metric data

Visit http://localhost:8080/actuator/prometheus
(replace localhost:8080 with your project's actual address)

You should see output like the following:

# HELP rabbitmq_published_total  
# TYPE rabbitmq_published_total counter
rabbitmq_published_total{application="center",name="rabbit",} 0.0
# HELP tomcat_global_sent_bytes_total  
# TYPE tomcat_global_sent_bytes_total counter
tomcat_global_sent_bytes_total{application="center",name="http-nio-8080",} 31.0
# HELP jvm_gc_max_data_size_bytes Max size of old generation memory pool
# TYPE jvm_gc_max_data_size_bytes gauge
jvm_gc_max_data_size_bytes{application="center",} 2.803367936E9
# HELP tomcat_threads_current_threads  
# TYPE tomcat_threads_current_threads gauge
tomcat_threads_current_threads{application="center",name="http-nio-8084",} 10.0
# HELP tomcat_sessions_active_current_sessions  
# TYPE tomcat_sessions_active_current_sessions gauge
tomcat_sessions_active_current_sessions{application="center",} 0.0
# HELP rabbitmq_channels  
# TYPE rabbitmq_channels gauge
rabbitmq_channels{application="center",name="rabbit",} 1.0
# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
# TYPE system_load_average_1m gauge
system_load_average_1m{application="center",} 1.53
# HELP tomcat_sessions_rejected_sessions_total  
# TYPE tomcat_sessions_rejected_sessions_total counter
tomcat_sessions_rejected_sessions_total{application="center",} 0.0
# HELP tomcat_threads_busy_threads  
# TYPE tomcat_threads_busy_threads gauge
tomcat_threads_busy_threads{application="center",name="http-nio-8084",} 1.0
# HELP tomcat_sessions_created_sessions_total  
# TYPE tomcat_sessions_created_sessions_total counter
tomcat_sessions_created_sessions_total{application="center",} 0.0
# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
# TYPE jvm_gc_memory_promoted_bytes_total counter
jvm_gc_memory_promoted_bytes_total{application="center",} 1.37307128E8
# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
# TYPE jvm_buffer_count_buffers gauge
jvm_buffer_count_buffers{application="center",id="direct",} 7.0
jvm_buffer_count_buffers{application="center",id="mapped",} 0.0

Prometheus configuration

Add a scrape job for the actuator endpoint (under scrape_configs in prometheus.yml):

  - job_name: 'center-actuator'
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ['192.168.1.240:8080']

job_name: a unique name for the Prometheus scrape job; choose it freely
metrics_path: the path to scrape; for Actuator this is normally fixed at /actuator/prometheus
targets: the list of service endpoints to scrape
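A static target works for a fixed address, but pod IPs in k8s change on every reschedule. A common approach, sketched here assuming the conventional prometheus.io/* pod annotations, is Prometheus's kubernetes_sd_configs with relabeling:

```yaml
  # Pods must be annotated, e.g.:
  #   prometheus.io/scrape: "true"
  #   prometheus.io/path: "/actuator/prometheus"
  #   prometheus.io/port: "8080"
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods that opt in via annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Use the annotated metrics path instead of the default /metrics
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Rewrite the scrape address to the annotated port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```

With this in place, new pods are discovered and scraped automatically, with no need to edit prometheus.yml per deployment.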

Visit Prometheus

(screenshot: Prometheus web UI querying the scraped actuator metrics)
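When building Grafana panels, a few example PromQL queries over the metrics shown earlier can serve as a starting point (exact metric names can vary with the Micrometer version):

```promql
# Heap memory currently used by the "center" application
sum(jvm_memory_used_bytes{application="center", area="heap"})

# GC pause time per second, averaged over 5 minutes
rate(jvm_gc_pause_seconds_sum{application="center"}[5m])

# Current Tomcat worker threads
tomcat_threads_current_threads{application="center"}
```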

Grafana configuration

Import a SpringBoot monitoring dashboard JSON into Grafana.

The result:
(screenshot: Grafana JVM monitoring dashboard)

Summary

With the steps above, monitoring a SpringBoot application's JVM on k8s is no longer a problem. Along the way, the monitoring data is persisted and automatic alerting becomes possible. In my view, for any application on k8s, monitoring data is best persisted, and Grafana + Prometheus is currently the best solution for that. Some readers may ask how to inspect a specific thread stack or the internals of the heap; for that I currently use Arthas, which enables concrete tuning of applications in production.


Origin blog.csdn.net/qq_28540443/article/details/108006715