27. Deploying Prometheus: Installation, Configuration, and Alerting

1. Pre-deployment Overview

The software used in this monitoring deployment:
prometheus          collects and stores metrics; provides the PromQL query language
alertmanager        manages and dispatches alerts
node_exporter       collects basic host performance metrics
blackbox_exporter   collects HTTP, HTTPS, TCP, and other probe metrics
redis_exporter      collects Redis metrics
mysqld_exporter     collects MySQL metrics
pushgateway         accepts pushed custom metrics and exposes them to prometheus
PrometheusAlert     alert forwarding system for operations, used together with alertmanager
grafana             dashboard display of the monitoring data
Deployment plan
Host prometheus (172.19.120.164)   runs prometheus, alertmanager, PrometheusAlert, and grafana. These four services share one machine here, but they can also be deployed separately.
Host game (172.19.120.4)           runs node_exporter, redis_exporter, pushgateway, and mysqld_exporter.
blackbox_exporter is also deployed on host prometheus in this setup.
Service ports
prometheus          listens on port 9090
alertmanager        listens on port 9093
PrometheusAlert     listens on port 8080
grafana             listens on port 3000
blackbox_exporter   listens on port 9115
node_exporter       listens on port 9100
redis_exporter      listens on port 9121
mysqld_exporter     listens on port 9104
pushgateway         listens on port 9091

For prometheus, alertmanager, blackbox_exporter, node_exporter, redis_exporter, pushgateway, and mysqld_exporter, run the binary with -h to print the help text and find the flag that sets the listen port.
The PrometheusAlert listen port is set in its configuration file.
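
For example, a quick way to locate that flag (the official exporters and prometheus itself all call it --web.listen-address, so grepping the help output works):

# print only the help lines that mention the listen address
/usr/local/node_exporter/node_exporter -h 2>&1 | grep listen-address
/usr/local/prometheus/prometheus -h 2>&1 | grep listen-address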

2. Deploying the Services

Services on host prometheus
Installing prometheus
Package download: https://github.com/prometheus/prometheus/releases

tar xvf prometheus-2.20.1.linux-amd64.tar.gz
mv prometheus-2.20.1.linux-amd64 /usr/local/prometheus

cat > /usr/lib/systemd/system/prometheus.service << EOF
[Unit]
Description=prometheus
After=network.target 
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/prometheus
ExecStart=/usr/local/prometheus/prometheus --web.enable-lifecycle --storage.tsdb.retention.time=90d --web.enable-admin-api --storage.tsdb.path=/usr/local/prometheus/data --web.external-url=http://prometheus-dd.aaaa.com
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable prometheus.service

Notes on the startup flags:
--web.enable-lifecycle                             allows hot-reloading the configuration after prometheus.yml changes (see below), instead of restarting the service
--storage.tsdb.retention.time=90d                  keep metric data for 90 days
--web.enable-admin-api                             enable the admin API, which among other things allows deleting data
--storage.tsdb.path=/usr/local/prometheus/data     directory where the data is stored
--web.external-url=http://prometheus-dd.aaaa.com   external URL, used for the hyperlinks embedded in alert messages
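
With the admin API enabled, old series can also be deleted through the TSDB admin endpoints. A minimal sketch (the match[] selector is only an illustration; -g stops curl from interpreting the brackets):

# drop every series of one job, then compact away the tombstones
curl -X POST -g 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]={job="node_APP_game"}'
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'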

After editing prometheus.yml, hot-reload the configuration with:
curl -X POST http://localhost:9090/-/reload
Installing alertmanager
Package download: https://github.com/prometheus/alertmanager/releases

tar xvf alertmanager-0.21.0.linux-amd64.tar.gz
mv alertmanager-0.21.0.linux-amd64 /usr/local/alertmanager

cat > /usr/lib/systemd/system/alertmanager.service << EOF
[Unit]
Description=alertmanager
After=network.target 
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/alertmanager
ExecStart=/usr/local/alertmanager/alertmanager --web.external-url=http://alertmanager-dd.aaaa.com
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable alertmanager.service

Notes on the startup flags:
--web.external-url=http://alertmanager-dd.aaaa.com   external URL, used for the hyperlinks embedded in alert messages
Installing PrometheusAlert
Repository: https://github.com/feiyu563/PrometheusAlert
Installation and usage docs: https://feiyu563.gitbook.io/prometheusalert/

git clone https://github.com/feiyu563/PrometheusAlert.git
mv PrometheusAlert /usr/local/
chmod 755 /usr/local/PrometheusAlert/example/linux/PrometheusAlert

cat > /usr/lib/systemd/system/PrometheusAlert.service << EOF
[Unit]
Description=PrometheusAlert
After=network.target
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/PrometheusAlert
ExecStart=/usr/local/PrometheusAlert/example/linux/PrometheusAlert
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable PrometheusAlert.service
Installing grafana
Package download: https://grafana.com/grafana/download

wget https://dl.grafana.com/oss/release/grafana-7.1.5-1.x86_64.rpm
yum install grafana-7.1.5-1.x86_64.rpm
systemctl daemon-reload
systemctl enable grafana-server.service
systemctl start grafana-server.service
Installing blackbox_exporter
Package download: https://github.com/prometheus/blackbox_exporter/releases

tar xvf blackbox_exporter-0.17.0.linux-amd64.tar.gz
mv blackbox_exporter-0.17.0.linux-amd64 /usr/local/blackbox_exporter

cat > /usr/lib/systemd/system/blackbox_exporter.service << EOF
[Unit]
Description=blackbox_exporter
After=network.target 
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/blackbox_exporter
ExecStart=/usr/local/blackbox_exporter/blackbox_exporter
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable blackbox_exporter.service
systemctl start blackbox_exporter.service
Accessing the services through nginx
Since prometheus, alertmanager, and PrometheusAlert have no user system of their own, nginx is used to add authentication.

The nginx virtual hosts configured below behave as follows:

grafana.aaaa.com proxies to grafana on port 3000. No nginx auth; grafana has its own login (initial account/password: admin/admin).
prometheus.aaaa.com proxies to prometheus on port 9090, with nginx auth.
prometheus-dd.aaaa.com proxies to prometheus on port 9090 without nginx auth. This domain matches the one in the service's startup flags; a forwarding rule lets requests from the DingTalk app straight through without authentication, while everything else is redirected to prometheus.aaaa.com.
alert.aaaa.com proxies to PrometheusAlert on port 8080, with nginx auth.
alertmanager.aaaa.com proxies to alertmanager on port 9093, with nginx auth.
alertmanager-dd.aaaa.com proxies to alertmanager on port 9093 without nginx auth. This domain matches the one in the service's startup flags; the same DingTalk-only rule applies, with everything else redirected to alertmanager.aaaa.com.

Note that prometheus-dd.aaaa.com and alertmanager-dd.aaaa.com are the URLs passed to --web.external-url at startup.
These two domains carry a forwarding restriction in the nginx config: access from the DingTalk app needs no nginx auth, while any other access is redirected to the authenticated domain.

yum -y install httpd-tools
htpasswd -bc /usr/local/nginx/prometheus.passwd <username> <password>

server {
    listen  80;
    server_name grafana.aaaa.com;
    location / {
        proxy_pass http://172.19.120.164:3000;
        rewrite ^/grafana/(.*) /$1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    access_log  /usr/local/nginx/logs/grafana.log main;
}

server {
    listen  80;
    server_name  prometheus.aaaa.com;
    location / {
        auth_basic "Prometheus Auth";
        auth_basic_user_file /usr/local/nginx/prometheus.passwd;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://172.19.120.164:9090;
    }
    access_log  /usr/local/nginx/logs/prometheus.log main;
}

server {
    listen  80;
    server_name  prometheus-dd.aaaa.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        if ($http_user_agent ~ "com.laiwang.DingTalk")
        #if ($remote_addr ~ "139.196.8.74")
        {
            proxy_pass http://172.19.120.164:9090;
            break;
        }
        rewrite ^/(.*) http://prometheus.aaaa.com/$1 permanent;
    }
    access_log  /usr/local/nginx/logs/prometheus-dd.log main;
}

server {
    listen  80;
    server_name  alert.aaaa.com;
    location / {
        auth_basic "Alert Auth";
        auth_basic_user_file /usr/local/nginx/prometheus.passwd;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://172.19.120.164:8080;
    }
    access_log  /usr/local/nginx/logs/alert.log main;
}

server {
    listen  80;
    server_name  alertmanager.aaaa.com;
    location / {
        auth_basic "Alert Auth";
        auth_basic_user_file /usr/local/nginx/prometheus.passwd;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://172.19.120.164:9093;
    }
    access_log  /usr/local/nginx/logs/alertmanager.log main;
}

server {
    listen  80;
    server_name  alertmanager-dd.aaaa.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        if ($http_user_agent ~ "com.laiwang.DingTalk")
        #if ($remote_addr ~ "139.196.8.74")
        {
            proxy_pass http://172.19.120.164:9093;
            break;
        }
        rewrite ^/(.*) http://alertmanager.aaaa.com/$1 permanent;
    }
    access_log  /usr/local/nginx/logs/alertmanager-dd.log main;
}

Services on host game

Installing node_exporter
Package download: https://github.com/prometheus/node_exporter/releases

tar xvf node_exporter-1.0.1.linux-amd64.tar.gz 
mv node_exporter-1.0.1.linux-amd64 /usr/local/node_exporter

cat > /usr/lib/systemd/system/node_exporter.service << EOF
[Unit]
Description=node_exporter
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/node_exporter
ExecStart=/usr/local/node_exporter/node_exporter
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable node_exporter.service
systemctl start node_exporter.service 
Installing redis_exporter
Package download: https://github.com/oliver006/redis_exporter/releases

My suggestion is to deploy one redis_exporter per machine to cover every Redis instance on that server;
the author's own recommendation is one redis_exporter per Redis instance.

tar xvf redis_exporter-v1.11.1.linux-amd64.tar.gz
mv redis_exporter-v1.11.1.linux-amd64 /usr/local/redis_exporter

cat > /usr/lib/systemd/system/redis_exporter.service << EOF
[Unit]
Description=redis_exporter
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/redis_exporter
ExecStart=/usr/local/redis_exporter/redis_exporter
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable redis_exporter.service
systemctl start redis_exporter.service 
Installing mysqld_exporter
Package download: https://github.com/prometheus/mysqld_exporter/releases

tar xvf mysqld_exporter-0.12.1.linux-amd64.tar.gz
mv mysqld_exporter-0.12.1.linux-amd64 /usr/local/mysqld_exporter

cat > /usr/lib/systemd/system/mysqld_exporter.service << EOF
[Unit]
Description=mysqld_exporter
After=network.target

[Service]
Restart=on-failure
Type=simple
Environment=DATA_SOURCE_NAME=root:password@(IP:3306)/
ExecStart=/usr/local/mysqld_exporter/mysqld_exporter
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable mysqld_exporter.service
systemctl start mysqld_exporter.service 

Environment=DATA_SOURCE_NAME=root:password@(IP:3306)/ 
root        the database account
password    its password
IP:3306     the database address
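
Using root works but is not required; the mysqld_exporter README suggests a dedicated low-privilege account instead. A sketch, with 'exporter' and the password as placeholder values:

# create a restricted account for the exporter
mysql -uroot -p -e "CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 3; GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';"
# then point the unit at it:
# Environment=DATA_SOURCE_NAME=exporter:XXXXXXXX@(localhost:3306)/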
Installing pushgateway
Package download: https://github.com/prometheus/pushgateway/releases

tar xvf pushgateway-1.2.0.linux-amd64.tar.gz
mv pushgateway-1.2.0.linux-amd64 /usr/local/pushgateway

cat > /usr/lib/systemd/system/pushgateway.service << EOF
[Unit]
Description=pushgateway
After=network.target 

[Service]
Restart=on-failure
WorkingDirectory=/usr/local/pushgateway
ExecStart=/usr/local/pushgateway/pushgateway
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 
systemctl enable pushgateway.service
systemctl start pushgateway.service 

3. Configuring and Starting the Services

During installation prometheus, alertmanager, and PrometheusAlert were only enabled at boot, not started; all the other services are already running.
Configure PrometheusAlert and start the service
Config file: /usr/local/PrometheusAlert/conf/app.conf. If it does not exist, run cp app-example.conf app.conf.

Edit app.conf; the parameters are documented in detail inside the file itself:
prometheus_cst_time=1	#set to 1 to render times in CST
open-dingding=1			#enable DingTalk alerting
ddurl=https://oapi.dingtalk.com/robot/send?access_token=****** #DingTalk robot webhook URL

Documentation on obtaining a DingTalk robot URL:
https://feiyu563.gitbook.io/prometheusalert/gao-jing-jie-shou-mu-biao-pei-zhi/ding-ding-gao-jing-pei-zhi
You can first create a DingTalk group, create a robot in the group settings, and then remove everyone but yourself, so debugging does not disturb other people.

Start the service:
systemctl start PrometheusAlert.service

Log in to the PrometheusAlert console, either through the domain set up earlier or directly via IP:8080.
The Test menu offers a DingTalk alert test to check that DingTalk alerting works (the DingTalk group receives a test alert message).

Under the AlertTemplate menu, edit the prometheus-dd template as follows and save it:

{{ $var := .externalURL}}{{ range $k,$v:=.alerts }}
{{if eq $v.status "resolved"}}
### [Prometheus恢复信息]({{$v.generatorURL}})
#### [{{$v.labels.alertname}}]({{$var}})
##### 告警项目:{{$v.annotations.project}}
##### 开始时间:{{GetCSTtime $v.startsAt}}
##### 结束时间:{{GetCSTtime $v.endsAt}}
##### 恢复主机:{{$v.labels.instance}}
{{else}}
### [Prometheus告警信息]({{$v.generatorURL}})
#### [{{$v.labels.alertname}}]({{$var}})
##### 告警项目:{{$v.annotations.project}}
##### 开始时间:{{GetCSTtime $v.startsAt}}
##### 故障主机:{{$v.labels.instance}}
##### 告警描述:{{$v.annotations.description}}
{{end}}
{{ end }}
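
To exercise the template without waiting for a real alert, you can also POST a hand-crafted alertmanager-style payload to PrometheusAlert. A sketch: the type/tpl/ddurl parameters mirror the webhook URL configured for alertmanager below, and the JSON fields are the ones the template reads:

curl -X POST 'http://127.0.0.1:8080/prometheusalert?type=dd&tpl=prometheus-dd&ddurl=https://oapi.dingtalk.com/robot/send?access_token=******' \
  -H 'Content-Type: application/json' \
  -d '{"externalURL":"http://alertmanager-dd.aaaa.com","alerts":[{"status":"firing","generatorURL":"http://prometheus-dd.aaaa.com/graph","labels":{"alertname":"test","instance":"game1-APP:9100"},"annotations":{"project":"test","description":"manual webhook test"},"startsAt":"2020-10-25T00:36:31Z"}]}'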

The alert messages render like this:

Prometheus告警信息
load5负载高于cpu核数的1.5倍
告警项目:捕鱼avid
开始时间:2020-10-25 00:36:31
故障主机:game3-avid:9100
告警描述:load5 高于 CPU核数的1.5倍 当前值为 12.21

Prometheus恢复信息
load5负载高于cpu核数的1.5倍
告警项目:捕鱼avid
开始时间:2020-10-25 00:36:31
结束时间:2020-10-25 00:45:01
恢复主机:game3-avid:9100

Both the title "Prometheus告警信息" and the alert name "load5负载高于cpu核数的1.5倍" render as blue links.
Clicking the title jumps to the prometheus-dd.aaaa.com domain and shows the latest value of the alerting expression.
Clicking the alert name jumps to the alertmanager-dd.aaaa.com domain, where the alert can be silenced.
Configure alertmanager and start the service
Config file: /usr/local/alertmanager/alertmanager.yml

Its contents are as follows:
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'web.hook'
receivers:
- name: 'web.hook'
  webhook_configs:
  - url: 'http://172.19.120.164:8080/prometheusalert?type=dd&tpl=prometheus-dd&ddurl=https://oapi.dingtalk.com/robot/send?access_token=******'
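
Before starting, the file can be validated with amtool, which ships in the alertmanager tarball:

/usr/local/alertmanager/amtool check-config /usr/local/alertmanager/alertmanager.yml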

systemctl start alertmanager.service

alertmanager has more advanced routing features: alerts can be dispatched to different receivers according to severity and project type.
See https://feiyu563.gitbook.io/prometheusalert/prometheusalert-gao-jing-yuan-pei-zhi/prometheus-pei-zhi
Configure prometheus and start the service
Directory layout
Under /usr/local/prometheus/:
app_node_config       target definitions for the hosts monitored by node_exporter
app_redis_config      target definitions for the hosts monitored by redis_exporter
app_blackbox_config   target definitions for the hosts monitored by blackbox_exporter
rule                  alerting rules
data                  metric storage

app_node_config is further split into subdirectories by host type, so that grafana can group hosts in its dashboards and the host dropdown does not grow too long:
app_node_config/
├── game
│   ├── game1-APP.yml
├── gate
│   ├── gate1-APP.yml
├── other
│   ├── admin-APP.yml
└── redis
    ├── redis1-APP.yml
  
The other config directories are likewise subdivided as needed.
The prometheus.yml configuration file
Config file: /usr/local/prometheus/prometheus.yml
A sample prometheus.yml:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - 172.19.120.164:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/usr/local/prometheus/rule/APP/*.yml"
  - "/usr/local/prometheus/rule/H5/*.yml"

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'node_prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node_APP_game'
    file_sd_configs:
    - refresh_interval: 15s
      files:
      - "/usr/local/prometheus/app_node_config/game/*.yml"

  - job_name: 'redis_exporter_redis1-APP'
    file_sd_configs:
    - refresh_interval: 15s
      files:
      - "/usr/local/prometheus/app_redis_config/redis1-APP.yml"
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 172.19.120.4:9121
    
  - job_name: 'blackbox_APP_V1'
    metrics_path: /probe
    params:
      module: [tcp_connect]
    file_sd_configs:
    - refresh_interval: 15s
      files:
      - "/usr/local/prometheus/app_blackbox_config/blackbox_V1*.yml"
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 172.19.120.164:9115
  - job_name: 'mysql_zabbix'
    static_configs:
      - targets: ['manager-APP:9104']
Brief notes on the configuration:
A separate job_name is defined for each kind of monitoring target.
Every job uses file_sd_configs, file-based service discovery with a 15-second refresh:
    file_sd_configs:
    - refresh_interval: 15s
      files:
      - "/usr/local/prometheus/app_node_config/game/*.yml"
Dropping a new yml into the matching directory is enough for the hosts to be discovered, without restarting prometheus.

Without service discovery, the equivalent static form would be:
    static_configs:
      - targets:

In the redis_exporter_redis1-APP job, replacement is the IP:port of the host running redis_exporter.
In the blackbox_APP_V1 job, replacement is the IP:port of the host running blackbox_exporter.

blackbox_APP_V1 uses blackbox_exporter's tcp_connect module, which only checks whether a TCP port is open.
blackbox_exporter also supports http_2xx, http_post_2xx, icmp, pop3s_banner, ssh_banner, irc_banner, and more; explore them as needed.
The most commonly used module is http_2xx, which checks a website's HTTP response status.
This is the configuration from the official documentation:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]  # Look for a HTTP 200 response.
    static_configs:
      - targets:
        - http://prometheus.io    # Target to probe with http.
        - https://prometheus.io   # Target to probe with https.
        - http://example.com:8080 # Target to probe with http on port 8080.
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115  # The blackbox exporter's real hostname:port.
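
The module names referenced by module: [...] are defined in blackbox_exporter's own blackbox.yml, shipped alongside the binary. The default file already contains http_2xx, tcp_connect, icmp, and others; a trimmed sketch of those definitions:

cat > /usr/local/blackbox_exporter/blackbox.yml << EOF
modules:
  http_2xx:
    prober: http
  tcp_connect:
    prober: tcp
  icmp:
    prober: icmp    # the icmp prober needs root or CAP_NET_RAW to send pings
EOF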

The icmp (ping) variant:
  - job_name: 'blackbox_icmp_APP_gateqq_to_game'
    metrics_path: /probe
    params:
      module: [icmp]
    static_configs:
      - targets: ['IP1','IP2','IP3']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: IP:9115
Target yml files referenced by the jobs
The yml defining each job's monitored hosts looks like this:
cat /usr/local/prometheus/app_node_config/game/game1-APP.yml 
- targets: ['game1-APP:9100']
With several hosts in one yml:
- targets: ['game1-APP:9100','game2-APP:9100']

cat /usr/local/prometheus/app_redis_config/redis1-APP.yml 
- targets:
  - redis://172.19.120.4:7000
With several Redis instances in one yml:
- targets:
  - redis://172.19.120.4:7000
  - redis://172.19.120.4:7001
  
cat /usr/local/prometheus/app_blackbox_config/blackbox_V1_game.yml 
- targets:
  - game1-APP:9001
With several host:port targets in one yml:
- targets:
  - game1-APP:9001
  - game2-APP:9001
 
Most targets in our yml files use hostnames rather than IPs.
Using hostnames requires binding hostname to IP in /etc/hosts.

Prometheus sets the instance label to whatever targets contains; if that is an IP, alert messages show bare IPs, which are hard to recognize, hence this approach.

Querying metrics with PromQL
node_load1 prints the load1 value of every monitored host.
The result looks like this:
node_load1{instance="game1-APP:9100",job="node_APP_game"}			0.1

node_load1 * on(instance) group_left(nodename) (node_uname_info) also prints load1 per host:
{instance="game1-APP:9100",job="node_APP_game",nodename="game01"} 	0.2
This variant adds a nodename label carrying the hostname, but node_uname_info is a metric collected by node_exporter.
When querying metrics collected by redis_exporter, * on(instance) group_left(nodename) (node_uname_info) cannot be used to add nodename, presumably because there is no matching node_uname_info series to join against; I have not found a way to write it.

So for now I bind hostnames in /etc/hosts and define distinct job_name values, identifying hosts in alert messages through instance and job.
Rule yml files under rule/
blackbox_tcp_rule.yml  node_rules.yml  redis_rules.yml
A separate yml is used for each exporter.
cat /usr/local/prometheus/rule/APP/node_rules.yml 
groups:
  - name: node_rule
    rules:
      - alert: 实例node进程状态
        expr: up{job=~"^node_APP.*"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.job}} 已经 down."
      - alert: 内存使用率
        expr: ceil((node_memory_MemTotal_bytes{job=~"^node_APP_(gate|game|logic|other)"} - (node_memory_MemFree_bytes{job=~"^node_APP_(gate|game|logic|other)"}+node_memory_Buffers_bytes{job=~"^node_APP_(gate|game|logic|other)"}+node_memory_Cached_bytes{job=~"^node_APP_(gate|game|logic|other)"} )) / node_memory_MemTotal_bytes{job=~"^node_APP_(gate|game|logic|other)"} * 100) > 90
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:内存使用率超过90%"
          description: "内存使用率过高 当前值为 {{$value}}%"
      - alert: redis内存使用率
        expr: ceil((node_memory_MemTotal_bytes{job="node_APP_redis"} - (node_memory_MemFree_bytes{job="node_APP_redis"}+node_memory_Buffers_bytes{job="node_APP_redis"}+node_memory_Cached_bytes{job="node_APP_redis"} )) / node_memory_MemTotal_bytes{job="node_APP_redis"} * 100) > 60
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:内存使用率超过60%"
          description: "内存使用率过高 当前值为 {{$value}}%"
      - alert: 剩余内存
        expr: node_memory_MemAvailable_bytes{job=~"^node_APP_(gate|game|logic|other)"} / (1024*1024*1024)  < 1
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:剩余内存小于1G"
          description: "剩余内存过小 当前值为 {{$value}}G"
      - alert: redis剩余内存
        expr: node_memory_MemAvailable_bytes{job="node_APP_redis"} / (1024*1024*1024)  < 10
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:剩余内存小于10G"
          description: "剩余内存过小 当前值为 {{$value}}G"
      - alert: gate load5负载高于cpu核数的2倍
        expr: sum by (instance) (node_load5{job=~"^node_APP_gate.*"}) > count by(instance) (count by(instance, cpu) (node_cpu_seconds_total{job=~"^node_APP_gate.*",mode="system"})) * 2
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:gate load5 高于 CPU核数的2倍"
          description: "gate load5 高于 CPU核数的2倍 当前值为 {{$value}}"
      - alert: load5负载高于cpu核数的1.5倍
        expr: sum by (instance) (node_load5{job=~"^node_APP_(game|logic|other)"}) > count by(instance) (count by(instance, cpu) (node_cpu_seconds_total{job=~"^node_APP_(game|logic|other)",mode="system"})) * 1.5
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:load5 高于 CPU核数的1.5倍"
          description: "load5 高于 CPU核数的1.5倍 当前值为 {{$value}}"
      - alert: load1负载高于cpu核数的3倍
        expr: sum by (instance) (node_load1{job=~"^node_APP_(gate|game|logic|other)"}) > count by(instance) (count by(instance, cpu) (node_cpu_seconds_total{job=~"^node_APP_(gate|game|logic|other)",mode="system"})) * 3
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:load1 高于 CPU核数的3倍"
          description: "load1 高于 CPU核数的3倍 当前值为 {{$value}}"
      - alert: redis load1
        expr: node_load1{job="node_APP_redis"}  > 6
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:load1 大于 6"
          description: "load1 大于 6 当前值为 {{$value}}"
      - alert: TCP ESTABLISHED 连接数
        expr: node_netstat_Tcp_CurrEstab{job=~"^node_APP_(gate|game|logic|other)"} > 8000
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:TCP ESTABLISHED 连接数大于8000"
          description: "TCP ESTABLISHED 连接数大于8000 当前值为 {{$value}}"
      - alert: redis TCP ESTABLISHED 连接数
        expr: node_netstat_Tcp_CurrEstab{job="node_APP_redis",instance=~"^redis.*"} > 10000
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:TCP ESTABLISHED 连接数大于10000"
          description: "TCP ESTABLISHED 连接数大于10000 当前值为 {{$value}}"
      - alert: TCP TIME_WAIT 连接数
        expr: node_sockstat_TCP_tw{job=~"^node_APP_(gate|game|logic|other|redis)"} > 5000
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:TCP TIME_WAIT 连接数大于5000"
          description: "TCP TIME_WAIT 连接数大于5000 当前值为 {{$value}}"
      - alert: 磁盘空间使用率
        expr: ceil((1-(node_filesystem_avail_bytes{job=~"node_APP.*"} / node_filesystem_size_bytes{job=~"node_APP.*"}))*100) > 85
        for: 5m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:磁盘空间使用率大于85%"
          description: "目录 {{$labels.mountpoint}} 磁盘使用率过高 当前值为 {{$value}}%"
      - alert: 磁盘inodes使用率
        expr: ceil((1-(node_filesystem_files_free{job=~"^node_APP.*"} / node_filesystem_files{job=~"^node_APP.*"}))*100) > 80
        for: 5m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:磁盘inodes使用率"
          description: "目录 {{$labels.mountpoint}} 磁盘inodes使用率过高 当前值为 {{$value}}%"
      - alert: CPU空闲时间百分比
        expr: ceil(avg by (instance)(irate(node_cpu_seconds_total{job=~"^node_APP.*",mode="idle"}[1m])) * 100) < 10
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:CPU空闲时间百分比"
          description: "CPU空闲时间百分比过低 当前值为 {{$value}}%"
      - alert: CPU等待输入时间百分比
        expr: ceil(avg by (instance)(irate(node_cpu_seconds_total{job=~"^node_APP.*",mode="iowait"}[1m])) * 100) > 80
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:CPU等待输入时间百分比"
          description: "CPU等待输入时间百分比过高 当前值为 {{$value}}%"

- alert defines the alert name
  expr: is the PromQL expression
  for: is the hold interval; the alert only fires if the condition is still abnormal when re-checked after this interval has passed since the state first changed

Matching supported by PromQL selectors:
node_load1{job="node_APP_game"}  	exact match: job equals node_APP_game
node_load1{job=~".*game.*"}			regex match: job contains game
node_load1{job=~".*game"}			regex match: job contains game and ends with game
node_load1{job=~"^node_APP.*"}		regex match: job starts with node_APP (the ^ is actually optional here)
node_load1{job!~".*APP.*"}			negated regex: job does not contain APP; other forms follow by analogy
node_load1{job=~"^node_APP_(gate|game|logic|other|redis)"} regex match: job is node_APP_gate, node_APP_game, and so on
job and instance are the labels most commonly used in selectors.
cat /usr/local/prometheus/rule/APP/redis_rules.yml 
groups:
  - name: redis_rule
    rules:
      - alert: 实例redis_exporter进程状态
        expr: up{job=~"redis_exporter.*APP"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.job}} 的 redis_exporter 进程已经 down."
      - alert: redis运行状态
        expr: redis_up{job=~"redis_exporter.*APP"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis进程运行状态"
          description: "{{$labels.job}} 的 redis 进程已经 down."
      - alert: redis1-master update slave
        expr: redis_instance_info{job="redis_exporter_redis1-APP",role="slave"} == 1
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis主从切换"
          description: "{{$labels.job}} 的主节点变为了从节点."
      - alert: redis4-slave update master
        expr: redis_instance_info{job="redis_exporter_redis4-APP",role="master"} == 1
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis主从切换"
          description: "{{$labels.job}} 的从节点变为了主节点."
cat /usr/local/prometheus/rule/APP/blackbox_tcp_rule.yml 
groups:
  - name: blackbox_rule
    rules:
      - alert: 实例blackbox_exporter进程状态
        expr: up{job=~"^blackbox_APP.*"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.instance}} 的 blackbox_exporter 进程已经 down."
      - alert: 端口状态
        expr: probe_success{job=~"^blackbox_APP.*"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:端口状态"
          description: "{{$labels.instance}} 的端口已经 down."
prometheus reload and snapshot scripts
cat reload_prometheus.sh 
#!/bin/bash
curl -X POST http://localhost:9090/-/reload

After changing the rule files or prometheus.yml, run this script to reload the service.
cat snapshot_prometheus_data.sh 
find /usr/local/prometheus/data/snapshots/ -maxdepth 1 -ctime +10 -exec rm -rf {} \;
curl -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot?skip_head=false

This deletes snapshots older than ten days and then takes a new one.
skip_head=false means the snapshot does not skip the data still in memory (the head block).
The script can be run on a schedule from crontab.
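
For example (the 3 a.m. schedule and script path here are just an assumption; adjust to taste):

0 3 * * * bash /usr/local/prometheus/snapshot_prometheus_data.sh >/dev/null 2>&1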
Check that the prometheus.yml file is well-formed:
./promtool check config prometheus.yml
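
promtool can validate the rule files the same way before a reload:

./promtool check rules /usr/local/prometheus/rule/APP/*.yml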
Start the service:
systemctl start prometheus.service
Pushing metrics to prometheus through pushgateway
On the host where pushgateway is installed,
add the following entry to the crontab:
* * * * * bash /usr/local/pushgateway/pushgateway.sh >/dev/null &

The shell script that pushes the metrics every minute:
cat /usr/local/pushgateway/pushgateway.sh 
#!/bin/bash
PATH=/sbin:/usr/sbin:/usr/local/sbin:/opt/gnome/sbin:/usr/local/bin:/usr/bin:/usr/X11R6/bin:/bin:/usr/games:/opt/gnome/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
export PATH
time=`date +%H:%M`

# expose whether iptables is running as a 0/1 gauge
if [ -f "/var/lock/subsys/iptables" ];then
  echo "iptables_up 1" > /tmp/pushgateway.txt 
else
  echo "iptables_up 0" > /tmp/pushgateway.txt
fi

# find the most requested URL for the current minute in the nginx access log
tail -n400 /usr/local/nginx/logs/access.log | grep $time | awk '{ print $7 }' | sort | uniq -c | sort -nr -k 1 | head -n 1 | awk '{ print $1,$2 }' | grep -q "[0-9]"
if [ $? -eq 0 ];then
  msg=`tail -n400 /usr/local/nginx/logs/access.log | grep $time | awk '{ print $7 }' | sort | uniq -c | sort -nr -k 1 | head -n 1 | awk '{ print $1,$2 }'`
  num=`echo $msg | awk '{ print $1 }'`
  url=`echo $msg | awk '{ print $2 }'`
  echo "nginx_access_info{url=\""$url"\"} $num" >> /tmp/pushgateway.txt
else
  echo "nginx_access_info{url=\""/"\"} 0" >> /tmp/pushgateway.txt
fi

# push the collected metrics to the local pushgateway
curl -XPOST --data-binary @/tmp/pushgateway.txt http://127.0.0.1:9091/metrics/job/pushgateway

A sample of the generated file:
cat /tmp/pushgateway.txt 
iptables_up 1
nginx_access_info{url="/hf-serverconfig/api/v1/security/config/channels/vivo"} 5

The curl call sends the file's contents to pushgateway:
curl -XPOST --data-binary @/tmp/pushgateway.txt http://127.0.0.1:9091/metrics/job/pushgateway

You can also pipe content to pushgateway directly:
echo "iptables_up 0" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/pushgateway
On the prometheus server,
add a job for it:

  - job_name: 'pushgateway_APP'
    #honor_labels: true
    file_sd_configs:
    - refresh_interval: 15s
      files:
      - "/usr/local/prometheus/app_pushgateway_config/*.yml"
      
The custom metrics can then be queried through PromQL.
Displaying prometheus data in grafana
In the grafana settings, add prometheus as a data source.

Linux host dashboard template: 9276

The bandwidth graphs in this template may be wrong; replace the expression as follows:
irate(node_network_receive_bytes_total{instance=~'$node',device != 'lo'}[5m])*8

Used space per partition (size minus available):
node_filesystem_size_bytes{instance=~'$node',fstype=~"ext4|xfs|tmpfs|ext3"}-node_filesystem_avail_bytes {instance=~'$node',fstype=~"ext4|xfs|tmpfs|ext3"}

Redis dashboard template: 763

QPS
rate(redis_commands_processed_total{instance=~"$instance"}[1m])
Command Calls / sec
topk(5, irate(redis_commands_total{instance=~"$instance"} [1m]))
Per-command latency in microseconds (only the main commands are selected):
increase(redis_commands_duration_seconds_total{cmd=~"get|set|hget|hset|exists|hexists|zincrby|zadd|del",instance=~"$instance"}[1m])/increase(redis_commands_total{cmd=~"get|set|hget|hset|exists|hexists|zincrby|zadd|del",instance=~"$instance"}[1m])*1000000

MySQL dashboard template: 7362
node_exporter and redis_exporter expose many more metrics than these templates use; add panels for the others as needed.
You can also build your own dashboards to compare the same metric across servers.

The data is all collected; how good the dashboard ends up looking is up to you.

4. Prometheus Federation

About federation
Federation gathers the monitoring data of several prometheus servers onto a single prometheus.
You can deploy a prometheus in each availability zone and aggregate their data onto one prometheus,
or deploy several prometheus servers within one zone, each scraping different hosts, and aggregate those.

In my case the private networks of the different zones are not connected, so each network segment runs its own prometheus for scraping; scraping over the public network would also work, in which case federation is unnecessary.

With everything aggregated onto one prometheus, only one set of grafana, alertmanager, and PrometheusAlert needs to be deployed.
primary prometheus   collects the data scraped by the other prometheus servers
worker prometheus    an ordinary prometheus server

The prometheus deployed earlier acts as the primary: it collects from the other prometheus servers and also scrapes the hosts in its own zone.
A new prometheus was deployed in the other availability zone to scrape the hosts there.
Worker node configuration
In the worker's prometheus.yml,
these two sections are not needed:
alerting:
rule_files:

Jobs are added in the same format as before:
  - job_name: 'node_by2_game'
    file_sd_configs:
    - refresh_interval: 15s
      files:
      - "/usr/local/prometheus/by2_node_config/game/*.yml"
Primary node configuration
In the primary's prometheus.yml,

every time a new job is added on a worker, a matching job must be added here as well:
  - job_name: 'node_by2_game'
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="node_by2_game"}'
    static_configs:
      - targets: ['IP:9090']
      
For ease of identification, the primary and the worker use the same job_name.

IP:9090					address of the worker's prometheus server
job="node_by2_game" 	an exact match is used here. In testing, when a regex match caught several node-related jobs on the worker, the host grouping in grafana broke and the per-host metrics of the worker's hosts could not be displayed (all the data had reached the primary; only the grafana display was off).
This is because the grafana dashboards build their grouping variable from job_name, and a job_name that in turn contains the worker's job_names confuses that display.

The other related jobs all look similar:
  - job_name: 'node_by2_redis'
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="node_by2_redis"}'
    static_configs:
      - targets: ['IP:9090']
The primary node now holds the worker's metrics.
PromQL queries can be run under the primary's prometheus domain,
and alerting only requires adding new yml files under the primary's rule directory.
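
To confirm that a worker really exposes a job through its federate endpoint, you can query it directly (IP stands for the worker's address, as above):

curl -G 'http://IP:9090/federate' --data-urlencode 'match[]={job="node_by2_game"}'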

5. Main References

prometheus

https://yunlzheng.gitbook.io/prometheus-book/

https://github.com/prometheus

Various exporters

https://prometheus.io/docs/instrumenting/exporters/#databases

PrometheusAlert

https://feiyu563.gitbook.io/prometheusalert/

prometheus also has one more important companion: pushgateway.
prometheus normally pulls metrics from the hosts it monitors,
whereas pushgateway runs on the monitored host and accepts metrics pushed to it, which prometheus then scrapes; the metrics are user-defined, see the documentation for usage details.

6. Appendix

Alert rule files

node_rules.yml
groups:
  - name: node_rule
    rules:
      - alert: 实例node进程状态
        expr: up{job=~"^node_APP.*"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.job}} 已经 down."
      - alert: 内存使用率
        expr: ceil((node_memory_MemTotal_bytes{job=~"^node_APP_(gate|game|logic|other)"} - (node_memory_MemFree_bytes{job=~"^node_APP_(gate|game|logic|other)"}+node_memory_Buffers_bytes{job=~"^node_APP_(gate|game|logic|other)"}+node_memory_Cached_bytes{job=~"^node_APP_(gate|game|logic|other)"} )) / node_memory_MemTotal_bytes{job=~"^node_APP_(gate|game|logic|other)"} * 100) > 90
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:内存使用率超过90%"
          description: "内存使用率过高 当前值为 {{$value}}%"
      - alert: redis内存使用率
        expr: ceil((node_memory_MemTotal_bytes{job="node_APP_redis"} - (node_memory_MemFree_bytes{job="node_APP_redis"}+node_memory_Buffers_bytes{job="node_APP_redis"}+node_memory_Cached_bytes{job="node_APP_redis"} )) / node_memory_MemTotal_bytes{job="node_APP_redis"} * 100) > 60
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:内存使用率超过60%"
          description: "内存使用率过高 当前值为 {{$value}}%"
      - alert: 剩余内存小于1G
        expr: node_memory_MemAvailable_bytes{job=~"^node_APP_(gate|game|logic|other)"} / (1024*1024*1024)  < 1
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:剩余内存小于1G"
          description: "剩余内存过小 当前值为 {{$value}}G"
      - alert: 剩余内存小于512M
        expr: node_memory_MemAvailable_bytes{job=~"^node_APP_(gate|game|logic|other)"} / (1024*1024)  < 512
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:剩余内存小于512M"
          description: "剩余内存过小 当前值为 {{$value}}M"
      - alert: redis剩余内存
        expr: node_memory_MemAvailable_bytes{job="node_APP_redis"} / (1024*1024*1024)  < 10
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:剩余内存小于10G"
          description: "剩余内存过小 当前值为 {{$value}}G"
      - alert: gate load5负载高于cpu核数的2倍
        expr: sum by (instance) (node_load5{job=~"^node_APP_gate.*"}) > count by(instance) (count by(instance, cpu) (node_cpu_seconds_total{job=~"^node_APP_gate.*",mode="system"})) * 2
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:gate load5 高于 CPU核数的2倍"
          description: "gate load5 高于 CPU核数的2倍 当前值为 {{$value}}"
      - alert: load5负载高于cpu核数的1.5倍
        expr: sum by (instance) (node_load5{job=~"^node_APP_(game|logic|other)"}) > count by(instance) (count by(instance, cpu) (node_cpu_seconds_total{job=~"^node_APP_(game|logic|other)",mode="system"})) * 1.5
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:load5 高于 CPU核数的1.5倍"
          description: "load5 高于 CPU核数的1.5倍 当前值为 {{$value}}"
      - alert: load1负载高于cpu核数的3倍
        expr: sum by (instance) (node_load1{job=~"^node_APP_(gate|game|logic|other)"}) > count by(instance) (count by(instance, cpu) (node_cpu_seconds_total{job=~"^node_APP_(gate|game|logic|other)",mode="system"})) * 3
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:load1 高于 CPU核数的3倍"
          description: "load1 高于 CPU核数的3倍 当前值为 {{$value}}"
      - alert: redis load1
        expr: node_load1{job="node_APP_redis"}  > 6
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:load1 大于 6"
          description: "load1 大于 6 当前值为 {{$value}}"
      - alert: TCP ESTABLISHED 连接数
        expr: node_netstat_Tcp_CurrEstab{job=~"^node_APP_(gate|game|logic|other)"} > 8000
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:TCP ESTABLISHED 连接数大于8000"
          description: "TCP ESTABLISHED 连接数大于8000 当前值为 {{$value}}"
      - alert: redis TCP ESTABLISHED 连接数
        expr: node_netstat_Tcp_CurrEstab{job="node_APP_redis",instance=~"^redis.*"} > 10000
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:TCP ESTABLISHED 连接数大于10000"
          description: "TCP ESTABLISHED 连接数大于10000 当前值为 {{$value}}"
      - alert: TCP TIME_WAIT 连接数
        expr: node_sockstat_TCP_tw{job=~"^node_APP_(gate|game|logic|other|redis)"} > 5000
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:TCP TIME_WAIT 连接数大于5000"
          description: "TCP TIME_WAIT 连接数大于5000 当前值为 {{$value}}"
      - alert: 磁盘空间使用率
        expr: ceil((1-(node_filesystem_avail_bytes{job=~"node_APP.*"} / node_filesystem_size_bytes{job=~"node_APP.*"}))*100) > 85
        for: 5m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:磁盘空间使用率大于85%"
          description: "目录 {{$labels.mountpoint}} 磁盘使用率过高 当前值为 {{$value}}%"
      - alert: 磁盘inodes使用率
        expr: ceil((1-(node_filesystem_files_free{job=~"^node_APP.*"} / node_filesystem_files{job=~"^node_APP.*"}))*100) > 80
        for: 5m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:磁盘inodes使用率"
          description: "目录 {{$labels.mountpoint}} 磁盘inodes使用率过高 当前值为 {{$value}}%"
      - alert: CPU空闲时间百分比
        expr: ceil(avg by (instance)(irate(node_cpu_seconds_total{job=~"^node_APP.*",mode="idle"}[1m])) * 100) < 10
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:CPU空闲时间百分比"
          description: "CPU空闲时间百分比过低 当前值为 {{$value}}%"
      - alert: CPU等待输入时间百分比
        expr: ceil(avg by (instance)(irate(node_cpu_seconds_total{job=~"^node_APP.*",mode="iowait"}[1m])) * 100) > 80
        for: 2m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:CPU等待输入时间百分比"
          description: "CPU等待输入时间百分比过高 当前值为 {{$value}}%"
redis_rules.yml
groups:
  - name: redis_rule
    rules:
      - alert: 实例redis_exporter进程状态
        expr: up{job=~"redis_exporter.*APP"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.job}} 的 redis_exporter 进程已经 down."
      - alert: redis运行状态
        expr: redis_up{job=~"redis_exporter.*APP"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis进程运行状态"
          description: "{{$labels.job}} 的 redis 进程已经 down."
      - alert: redis主节点变成从节点
        expr: redis_instance_info{job=~"redis_exporter_redis[1-3]-APP",role="slave"} == 1
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis主从切换"
          description: "{{$labels.job}} 的主节点变为了从节点."
      - alert: redis从节点变成主节点
        expr: redis_instance_info{job=~"redis_exporter_redis[4-6]-APP",role="master"} == 1
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis主从切换"
          description: "{{$labels.job}} 的从节点变为了主节点."
      - alert: redis slave 备份中断
        expr: (time() - redis_rdb_last_save_timestamp_seconds{job=~"redis_exporter_redis.*APP"}) * on(instance) group_left(role) (redis_instance_info{role="slave"}) > 60 * 60
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:redis slave 备份中断"
          description: "{{$labels.job}} 的 redis slave 已经超过 {{$value}} 秒 未生成rdb文件"
blackbox_tcp_rule.yml
groups:
  - name: blackbox_rule
    rules:
      - alert: 实例blackbox_exporter进程状态
        expr: up{job=~"^blackbox_APP.*"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.instance}} 的 blackbox_exporter 进程已经 down."
      - alert: 端口状态
        expr: probe_success{job=~"^blackbox_APP.*"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:端口状态"
          description: "{{$labels.instance}} 的端口已经 down."
      - alert: http_status_code
        expr: probe_http_status_code{job="blackbox_https_itunes"} != 200
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}: http_status_code"
          description: "{{$labels.instance}} http_status_code is not 200."
      - alert: probe_duration_seconds
        expr: probe_duration_seconds{job="blackbox_http"} > 5
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}: probe_duration_seconds"
          description: "{{$labels.instance}} 响应时间 大于 5s 当前值为 {{$value}} s."
      - alert: probe_ssl_earliest_cert_expiry
        expr: ceil((probe_ssl_earliest_cert_expiry-time())/(60*60*24)) < 7
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}: probe_ssl_earliest_cert_expiry"
          description: "{{$labels.instance}} 的证书有效期小于7天."
pushgateway_rules.yml
groups:
  - name: pushgateway_rule
    rules:
      - alert: 实例pushgateway进程状态
        expr: up{job="pushgateway_APP"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:Targets is down"
          description: "{{$labels.instance}} 的 pushgateway 进程已经 down."
      - alert: iptables 运行状态
        expr: iptables_up{job="pushgateway_APP"} == 0
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:iptables进程运行状态"
          description: "{{$labels.instance}} 的 iptables 进程已经关闭."
      - alert: url 访问过于频繁
        expr: nginx_access_info > 100
        for: 1m
        annotations:
          project: 项目
          summary: "{{$labels.instance}}:url访问过于频繁"
          description: "{{$labels.instance}} 上 url {{$labels.url}} 被访问过于频繁 一分钟内被访问了 {{$value}} 次."

Adding labels

Add a hostname label derived from the node target address:

 relabel_configs:
      - source_labels: [__address__]
        regex: '([a-zA-Z].*):9[0-9]{3}'
        target_label: hostname
        replacement: $1
        action: replace
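
With a target such as game1-APP:9100 this yields hostname="game1-APP", so the new label can be used directly in PromQL selectors, for example:

node_load1{hostname="game1-APP"}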
