Microservice Monitoring and Alerting (Part 2): Prometheus Introduction and Environment Setup

1. Prometheus Introduction

  Prometheus is an open-source systems monitoring and alerting toolkit with an active ecosystem. Its architecture and some of its ecosystem components are as follows. At the core sits the Prometheus server, whose main job is to collect and store time-series data according to our configuration. Service discovery tells the Prometheus server where to collect data from; there are two ways, a static one where targets are listed in a file, and a dynamic one where targets come from ZooKeeper or another registry, so that when the registered data changes, Prometheus goes to the new locations to fetch data. Jobs/exporters are what our applications generally provide for the Prometheus server to scrape. This is a pull model; the benefit for our applications is that a service does not need to know where Prometheus is, it only needs to expose its data. The Pushgateway supports a push model, because some data is not always available to be scraped, for example metrics from a scheduled task; such short-lived jobs push their data to the Pushgateway, and the Prometheus server then pulls the data from the Pushgateway. These are the data-collection components.

  Once the data has been collected into the Prometheus server, its HTTP server exposes the data so that front-end applications can query it with PromQL and visualize or export it; Grafana is the recommended component for this. Alertmanager handles alerting, and there are many notification channels: email, WeChat, DingTalk, or a webhook you write yourself. You can define alerting rules over the time-series data in the Prometheus server; when a rule fires, the alert is pushed to Alertmanager, which does not notify immediately but evaluates the alert several times to prevent false alarms. A sketch of such a rule file is shown below.
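To make the alerting-rule piece concrete, here is a minimal sketch of a Prometheus rule file; it is not part of this article's setup, and the rule name, expression, duration, and labels are all placeholders. Such a file would be referenced from prometheus.yml via rule_files.

# rules.yml -- hypothetical example, not part of this project's configuration
groups:
  - name: example-alerts
    rules:
      - alert: InstanceDown          # arbitrary alert name
        expr: up == 0                # fires for any target Prometheus fails to scrape
        for: 1m                      # the condition must hold for 1 minute before firing
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"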

2. Prometheus Environment Setup

2.1. File structure for installing Prometheus with Docker

  2.1.1. docker-compose.yml

version: "3"
services:
  prometheus:
    image: prom/prometheus:v2.4.3
    container_name: 'prometheus'
    volumes:
    - ./prometheus/:/etc/prometheus/
    ports:
    - '8999:9090'

  2.1.2. prometheus.yml

# Global configuration
global:
  # How often to scrape targets by default
  scrape_interval: 15s

# Scrape targets
scrape_configs:
  # Our Spring Boot project
  - job_name: 'springboot-App'
    # Scrape every 10s, overriding the global setting
    scrape_interval: 10s
    # Request path for metrics
    metrics_path: '/actuator/prometheus'

    static_configs:
      # Where to scrape from; the project runs on the local machine,
      # so use the hostname of the host running Docker
      - targets: ['host.docker.internal:9080']
        # Attach a label to the scraped data
        labels:
          application: 'springboot-App'

  # Monitor the Prometheus instance itself
  - job_name: 'prometheus'

    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

  2.1.3. From the command line, enter the monitoring directory and run docker-compose -f docker-compose.yml up

   2.1.4. Visit http://127.0.0.1:8999/ to access the Prometheus instance deployed in the container. Under Status -> Targets you can see the two scrape targets we configured. Endpoint is the data endpoint, State is its current state, Labels are the attached labels, Last Scrape is the time since the last scrape, and Error shows any error message.

3. Integrating Prometheus with Spring Boot, taking the order service as an example

3.1. Add the Spring Boot Actuator dependency for the monitoring endpoints

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>

3.2. Add the micrometer-registry-prometheus dependency, which adds a prometheus endpoint to the Actuator

        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>

3.3. Configure which endpoints are exposed in application.yml; here we restrict it to exposing only three (a sketch is given below).
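The application.yml snippet itself is not reproduced in this text, so the following is only a sketch; it assumes the three exposed endpoints are health, info, and prometheus (the prometheus endpoint is the one Prometheus scrapes).

# application.yml -- sketch only; the exact list of three endpoints is an assumption
management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus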

3.4. In the resource server configuration, allow endpoint requests without authentication

package cn.caofanqi.security.config;

import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter;

/**
 * Resource server configuration
 *
 * @author caofanqi
 * @date 2020/2/14 14:07
 */
@Configuration
public class ResourceServerConfig extends ResourceServerConfigurerAdapter {


    @Override
    public void configure(HttpSecurity http) throws Exception {
        // Allow all Actuator endpoints without authentication so Prometheus can scrape them
        http.authorizeRequests().requestMatchers(EndpointRequest.toAnyEndpoint()).permitAll()
                // All other requests still require authentication
                .anyRequest().authenticated();
    }
}

3.5. Start the order service and refresh the http://127.0.0.1:8999/targets page; you can see that the endpoint we configured is now in the UP state.

 3.6. Via http://order.caofanqi.cn:9080/actuator/prometheus we can see the data the service exposes for Prometheus: each entry is a metric name followed by a number, and some metric names carry {} containing that metric's labels.
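For illustration only (these lines are not copied from the actual order service output), a few lines in that format as produced by Micrometer look roughly like this; the values are invented:

# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="PS Eden Space"} 1.23456789E8
http_server_requests_seconds_count{method="GET",status="200",uri="/actuator/prometheus"} 3.0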

 3.7. We can view this data on Prometheus's Graph page. The labels inside {} there are more numerous than in our project's own output, because Prometheus also turns some settings from prometheus.yml into labels: job_name -> job, static_configs.labels.application -> application, static_configs.targets -> instance. This makes it easy to filter the data.

 3.8. We can also filter by label and analyze the data visually in the Graph view, as in the query sketched below.
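As a hypothetical example (this exact query does not appear in the original post), filtering a standard Micrometer counter by the application label we attached in prometheus.yml can be typed into the Graph page like this:

http_server_requests_seconds_count{application="springboot-App"}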

However, the UI Prometheus provides itself is honestly not very good-looking; in the next article we will use Grafana instead.

 

 Project source code: https://github.com/caofanqi/study-security/tree/dev-prometheus1

Originally published at www.cnblogs.com/caofanqi/p/12307635.html