1 Installation and optimization of Prometheus

One installation

1 Download and start

Download Prometheus from https://prometheus.io/download/#prometheus
After downloading, unpack it and copy it to /usr/local:

[root@server04 down]# tar -xvzf prometheus-2.2.0-rc.0.linux-amd64.tar.gz
cp -r prometheus-2.2.0-rc.0.linux-amd64 /usr/local/

./prometheus #run directly in the foreground

2 Start in the background

daemonize is Unix software for managing background daemons. To run Prometheus in the background, first download and install daemonize:


git clone https://github.com/bmc/daemonize.git
cd daemonize && sh configure && make && sudo make install

Use daemonize to start the Prometheus service:

daemonize -c /data/prometheus/ /data/prometheus/up.sh

After running the above command, a data directory is generated under /data/prometheus/; subsequent restarts must specify this same directory.
-c specifies the working directory, and /data/prometheus/up.sh is the startup script. The script content is as follows (the unpacked prometheus must be placed under the /data/prometheus directory):

/data/prometheus/prometheus/prometheus  --config.file="/data/prometheus/prometheus/prometheus.yml"

3 Configuration file

[root@k8s-node1 prometheus]# grep -v "^#" /data/prometheus/prometheus/prometheus.yml
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

Two things matter here. One is the global setting scrape_interval, which controls how often Prometheus collects (scrapes) data.

The other is the jobs and targets: each job under scrape_configs defines a job label, and the machines we need to monitor are listed as targets under it.
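For example, to monitor additional machines you could append a job of your own under scrape_configs. The job name and target addresses below are placeholders, not from the original article:

```yaml
scrape_configs:
  - job_name: 'node'        # becomes the label job="node" on every scraped series
    static_configs:
      - targets: ['192.168.1.212:9100', '192.168.1.213:9100']  # hypothetical hosts
```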

4 node_exporter installation and background operation

4.1 Client configuration
Download node_exporter from https://prometheus.io/download/#node_exporter
By default it listens on port 9100. Start it in the background:

daemonize -c /data/node_exporter/ /data/node_exporter/up.sh

Script content:

cat up.sh
/prometheus/node_exporter/node_exporter --web.listen-address=":9200"

Query the data (the script above set the listen port to 9200; with the default configuration it would be 9100):
curl localhost:9200/metrics

4.2 Server configuration

● The monitoring server updates its configuration file, adding the node on port 9200 (the port configured in the node_exporter startup script above; node_exporter's default is 9100).
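A minimal scrape_configs entry for this node might look like the following sketch (the monitored host's IP and the job name are placeholders; the port matches the up.sh above):

```yaml
  - job_name: 'node_exporter'           # hypothetical job name
    static_configs:
      - targets: ['192.168.1.212:9200'] # placeholder IP; 9200 from up.sh
```

Restart Prometheus after editing prometheus.yml so the change takes effect.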

Then open the Prometheus web UI and click Status -> Targets to confirm the node shows up.

Two Pushgateway installation, operation, and configuration

2.1 Monitoring server configuration (prometheus server)

192.168.1.211 is the server running pushgateway; add it as a scrape target in prometheus.yml,
and then restart the Prometheus server.
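A plausible scrape job for this Pushgateway, using the address from this article, is sketched below; honor_labels: true makes Prometheus keep the job/instance labels set at push time instead of overwriting them:

```yaml
  - job_name: 'pushgateway'
    honor_labels: true                  # keep labels supplied by the pusher
    static_configs:
      - targets: ['192.168.1.211:9091']
```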

2.2 Operation of the monitored terminal

The download address is https://prometheus.io/download/#pushgateway. After unpacking, start it; the
default port is 9091. After
startup is complete, you can visit http://192.168.1.211:9091/#

Write a monitoring script that collects the number of connections in a wait state:

#!/bin/bash
instance_name=`hostname -f | cut -d'.' -f1` # local machine name, used as the instance label
if [ "$instance_name" == "localhost" ];then
    echo "Must FQDN hostname"
    exit 1
fi
count_netstat_wait_connections=`netstat -an | grep -i wait | wc -l`
echo "count_netstat_wait_connections $count_netstat_wait_connections" | curl --data-binary @- http://192.168.1.211:9091/metrics/job/pushgateway/instance/$instance_name

Then set crontab: */1 * * * * bash /prometheus/pushgateway.sh
Of course, your own scripts should likewise be scheduled with crontab.
Since cron's minimum interval is one minute, use sleep if you want runs closer to the 15s scrape interval.
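A common workaround for cron's one-minute floor is to register the same script several times with staggered sleeps, for example:

```
* * * * * bash /prometheus/pushgateway.sh
* * * * * sleep 15; bash /prometheus/pushgateway.sh
* * * * * sleep 30; bash /prometheus/pushgateway.sh
* * * * * sleep 45; bash /prometheus/pushgateway.sh
```

This pushes roughly every 15 seconds, matching the 15s scrape_interval configured earlier.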

Then go to the monitoring page to see if the chart is generated:

URL explanation:
the script finally pushes the key & value to pushgateway

curl --data-binary sends the data in an HTTP POST request to the HTTP server (pushgateway), exactly as a browser does when a user submits an HTML form. The data in the HTTP POST body is sent as raw binary, with no extra processing.

http://prometheus.server.com:9091/metrics/job/pushgateway1/instance/$instance_name
Finally, POST the key & value to this pushgateway URL.

This URL address is divided into three parts:
http://prometheus.server.com:9091/metrics
This is the base location of the URL.
job/pushgateway1
This is the first label: which job, as defined in prometheus.yml, the data is pushed to.
instance/$instance_name
This is the second label: the machine name displayed after the push, e.g. {instance="server01"}.
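Assuming the pushing host is named server01, the pushed sample would then show up in Prometheus roughly as follows (a sketch; the value 42 is made up, and the job name matches the script above):

```
count_netstat_wait_connections{instance="server01", job="pushgateway"} 42
```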


Origin www.cnblogs.com/huningfei/p/12715234.html