Remote-writing Prometheus data to elasticsearch
1. Deploy elasticsearch
version: '3'
# Bridge network "es" so the containers can reach each other
networks:
  es:
    driver: bridge
services:
  elasticsearch:
    image: elasticsearch:7.14.1
    container_name: elasticsearch
    restart: unless-stopped # restart automatically unless the container was explicitly stopped
    volumes: # map host directories into the container
      - "./elasticsearch/data:/usr/share/elasticsearch/data"
      - "./elasticsearch/logs:/usr/share/elasticsearch/logs"
      - "./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
      # - "./elasticsearch/config/jvm.options:/usr/share/elasticsearch/config/jvm.options"
      - "./elasticsearch/plugins/ik:/usr/share/elasticsearch/plugins/ik" # IK Chinese analysis plugin
    environment: # environment variables, equivalent to -e in docker run
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
      TAKE_FILE_OWNERSHIP: "true" # fix ownership of the mounted volumes
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      ELASTIC_PASSWORD: "123456" # password for the elastic user
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - es
  kibana:
    image: kibana:7.14.1
    container_name: kibana
    restart: unless-stopped
    volumes:
      - ./elasticsearch/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    networks:
      - es
The configuration files and plugins can be found at: https://gitee.com/huanglei1111/docker-compose/tree/master/Linux/elasticsearch
Check that everything is running:
Access elasticsearch at http://<server-ip>:9200
Access kibana at http://<server-ip>:5601
Username: elastic
Password: 123456
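As a quick sanity check you can query the cluster health endpoint with basic auth. The sketch below (host and credentials taken from the compose file above; replace 127.0.0.1 with your server's IP) only builds the request; calling urlopen(req) actually performs the check.

```python
import base64
import urllib.request

def es_health_request(host, user, password):
    """Build an authenticated GET request for the ES cluster health endpoint."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(f"http://{host}:9200/_cluster/health")
    req.add_header("Authorization", f"Basic {token}")
    return req

req = es_health_request("127.0.0.1", "elastic", "123456")
# urllib.request.urlopen(req) would return the cluster health JSON
```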
2. Deploy prometheus
version: "3"
# Bridge network so the containers can reach each other
networks:
  prometheus:
    ipam:
      driver: default
      config:
        - subnet: "172.22.0.0/24"
services:
  # Open-source systems monitoring and alerting toolkit
  prometheus:
    image: prom/prometheus:v2.34.0
    container_name: hl-prometheus
    restart: unless-stopped
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
      # - "./web-config.yml:/etc/prometheus/web-config.yml"
    command:
      --config.file=/etc/prometheus/prometheus.yml
      --web.enable-lifecycle
      # --web.config.file=/etc/prometheus/web-config.yml
    ports:
      - "19090:9090"
    depends_on: # must reference the service name, not the container_name
      - node-exporter
    networks:
      prometheus:
        ipv4_address: 172.22.0.11
  # Collects host-level (server) metrics
  node-exporter:
    image: prom/node-exporter:v1.3.1
    container_name: hl-node-exporter
    restart: unless-stopped
    volumes:
      - "./node-exporter/proc:/host/proc:ro"
      - "./node-exporter/sys:/host/sys:ro"
      - "./node-exporter:/rootfs:ro"
    ports:
      - "19100:9100"
    networks:
      prometheus:
        ipv4_address: 172.22.0.22
  # prometheusbeat forwards Prometheus data into elasticsearch
  beat:
    image: infonova/prometheusbeat
    container_name: hl-prometheusbeat
    ports:
      - "18081:8080"
    depends_on: # must reference the service name, not the container_name
      - prometheus
    volumes:
      - "./config/prometheusbeat.yml:/prometheusbeat.yml"
      - "/etc/localtime:/etc/localtime"
    networks:
      prometheus:
        ipv4_address: 172.22.0.33
The related configuration files can be found at: https://gitee.com/huanglei1111/docker-compose/tree/master/Linux/prometheus/prometheus-es/config
Check whether the deployment succeeded and whether the monitoring targets are healthy.
Note: if a target's status is DOWN, change the IPs under targets in the prometheus.yml configuration file to your server's IP.
3. Write data to es through prometheusbeat
The docker-compose-prometheus.yml file above already deploys prometheusbeat.
If elasticsearch requires a username and password, add them to the prometheusbeat.yml configuration file:
prometheusbeat:
  # Port the prometheusbeat service listens on. Defaults to 8080
  listen: ":8080"
  # Context path. Defaults to /prometheus
  context: "/prometheus"
output.elasticsearch:
  # elasticsearch address
  hosts: ["127.0.0.1:9200"]
  username: "elastic"
  password: "123456"
Next, modify the prometheus.yml configuration file.
Prometheus implements remote storage through remote_write, so add a remote_write section to prometheus.yml:
global:
  scrape_interval: 10s
  scrape_timeout: 10s
  evaluation_interval: 10m
remote_write:
  # remote-write into prometheusbeat
  - url: "http://127.0.0.1:18081/prometheus"
    write_relabel_configs:
      - source_labels: [__name__]
        action: keep
        regex: go_gc_cycles_automatic_gc_cycles_total
    remote_timeout: 30s
scrape_configs:
  # prometheus itself
  - job_name: prometheus
    static_configs:
      - targets: ['127.0.0.1:19090']
        labels:
          instance: prometheus
  # node exporter metrics, i.e. the linux host
  - job_name: linux
    static_configs:
      - targets: ['127.0.0.1:19100']
        labels:
          instance: localhost
Configuration notes
url: the service address of prometheusbeat
write_relabel_configs: very useful for filtering data; you can set multiple conditions, and only the series that match them are written to remote storage
- source_labels: which label(s) to filter on
- regex: the regular expression the label value must match
- action: keep (retain) / drop (discard)
So the configuration above means: after Prometheus collects the data, it remote-writes to prometheusbeat only the series whose metric name is go_gc_cycles_automatic_gc_cycles_total.
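The keep rule can be illustrated with a small simulation (a sketch, not Prometheus's actual implementation): series whose __name__ matches the regex are forwarded, everything else is dropped.

```python
import re

def apply_keep(series, source_label, regex):
    # Simulate a write_relabel_configs "keep" rule. Prometheus anchors the
    # regex to the full label value, which is what re.fullmatch gives us.
    pattern = re.compile(regex)
    return [s for s in series if pattern.fullmatch(s.get(source_label, ""))]

scraped = [
    {"__name__": "go_gc_cycles_automatic_gc_cycles_total", "value": 12},
    {"__name__": "node_cpu_seconds_total", "value": 3456.7},
]
forwarded = apply_keep(scraped, "__name__", "go_gc_cycles_automatic_gc_cycles_total")
# only the matching series is remote-written
```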
Then restart prometheus. From then on, collected data that passes the filter is saved to elasticsearch; the index is created automatically on the first write (here prometheusbeat-7.3.1), and the data is stored in that index.
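Because the compose file starts Prometheus with --web.enable-lifecycle, a full restart is not strictly necessary: the configuration can be reloaded over HTTP. A sketch (replace 127.0.0.1 with your server's IP; 19090 is the host port mapped in the compose file):

```python
import urllib.request

# Build a POST to Prometheus's lifecycle reload endpoint. The flag
# --web.enable-lifecycle must be set, as in the compose file above.
reload_req = urllib.request.Request("http://127.0.0.1:19090/-/reload", method="POST")
# urllib.request.urlopen(reload_req) would trigger the config reload
```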
4. Elasticsearch head verification
Opening the elasticsearch-head plugin in the browser, we can see that the index has been created.
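Besides the head plugin, you can query the index directly over the search API. The sketch below builds a match_all search request against the prometheusbeat index pattern, reusing the credentials from the compose file (replace 127.0.0.1 with your server's IP); calling urlopen(req) returns the stored documents, if any.

```python
import base64
import json
import urllib.request

def index_search_request(host, index, user, password):
    """Build an authenticated match_all search against an index pattern."""
    body = json.dumps({"query": {"match_all": {}}, "size": 1}).encode()
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"http://{host}:9200/{index}/_search", data=body, method="POST"
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Basic {token}")
    return req

req = index_search_request("127.0.0.1", "prometheusbeat-*", "elastic", "123456")
# urllib.request.urlopen(req) returns the matching documents, if any
```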
Reference documents
Prometheus remote storage to ElasticSearch configuration method