Prometheus relies on exporters to expose metrics from hosts and applications. In this post we use node_exporter to collect host-level metrics such as CPU, memory, and disk usage.
Install node_exporter
Download the Linux installation package from the official Prometheus download page.
Download address: https://prometheus.io/download/
Installation package: node_exporter-0.18.1.linux-amd64.tar.gz
$ tar zxvf node_exporter-0.18.1.linux-amd64.tar.gz
$ cd node_exporter-0.18.1.linux-amd64/
$ ./node_exporter --version
node_exporter, version 0.18.1 (branch: HEAD, revision: 3db77732e925c08f675d7404a8c46466b2ece83e)
build user: root@b50852a1acba
build date: 20190604-16:41:18
go version: go1.12.5
Run node_exporter
Run the node_exporter binary directly to start the service. On startup it prints the list of enabled collectors:
$ ./node_exporter
INFO[0000] Starting node_exporter (version=0.18.1, branch=HEAD, revision=3db77732e925c08f675d7404a8c46466b2ece83e) source="node_exporter.go:156"
INFO[0000] Build context (go=go1.12.5, user=root@b50852a1acba, date=20190604-16:41:18) source="node_exporter.go:157"
INFO[0000] Enabled collectors: source="node_exporter.go:97"
INFO[0000] - arp source="node_exporter.go:104"
INFO[0000] - bcache source="node_exporter.go:104"
INFO[0000] - bonding source="node_exporter.go:104"
INFO[0000] - conntrack source="node_exporter.go:104"
INFO[0000] - cpu source="node_exporter.go:104"
INFO[0000] - cpufreq source="node_exporter.go:104"
INFO[0000] - diskstats source="node_exporter.go:104"
INFO[0000] - edac source="node_exporter.go:104"
INFO[0000] - entropy source="node_exporter.go:104"
INFO[0000] - filefd source="node_exporter.go:104"
INFO[0000] - filesystem source="node_exporter.go:104"
INFO[0000] - hwmon source="node_exporter.go:104"
INFO[0000] - infiniband source="node_exporter.go:104"
INFO[0000] - ipvs source="node_exporter.go:104"
INFO[0000] - loadavg source="node_exporter.go:104"
INFO[0000] - mdadm source="node_exporter.go:104"
INFO[0000] - meminfo source="node_exporter.go:104"
INFO[0000] - netclass source="node_exporter.go:104"
INFO[0000] - netdev source="node_exporter.go:104"
INFO[0000] - netstat source="node_exporter.go:104"
INFO[0000] - nfs source="node_exporter.go:104"
INFO[0000] - nfsd source="node_exporter.go:104"
INFO[0000] - pressure source="node_exporter.go:104"
INFO[0000] - sockstat source="node_exporter.go:104"
INFO[0000] - stat source="node_exporter.go:104"
INFO[0000] - textfile source="node_exporter.go:104"
INFO[0000] - time source="node_exporter.go:104"
INFO[0000] - timex source="node_exporter.go:104"
INFO[0000] - uname source="node_exporter.go:104"
INFO[0000] - vmstat source="node_exporter.go:104"
INFO[0000] - xfs source="node_exporter.go:104"
INFO[0000] - zfs source="node_exporter.go:104"
INFO[0000] Listening on :9100 source="node_exporter.go:170"
Once the service is running, you can view the collected metrics in a browser at http://<host-ip>:9100/metrics.
With so many collectors enabled, you may not want all of them. To disable a collector, pass --no-collector.<name> when starting the service. For example, ./node_exporter --no-collector.zfs starts the exporter without the zfs collector.
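Running node_exporter in the foreground is fine for testing, but in production it is usually managed by a service manager such as systemd. A minimal unit-file sketch; the binary path and dedicated user below are assumptions for this example, not part of the official package:

```ini
# /etc/systemd/system/node_exporter.service (example path)
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
# Assumes the binary was copied to /usr/local/bin and a node_exporter user exists
User=node_exporter
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing the unit, enable it with systemctl enable --now node_exporter.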
Configure Prometheus
After the node_exporter service is started, Prometheus must be told to scrape it. Edit prometheus.yml and add a new job under scrape_configs:
scrape_configs:
  ...
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
The complete prometheus.yml after this change looks like the following:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
By default node_exporter exposes a large number of metrics. You can also restrict a scrape to only the collectors you need by setting the collect[] parameter in the scrape configuration, for example:
...
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
    params:
      collect[]:
        - cpu
        - meminfo
        - loadavg
        - netstat
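Under the hood, Prometheus appends the params section to the scrape URL as query parameters, and node_exporter uses the collect[] parameter to filter which collectors run for that request. A minimal sketch of the resulting URL, built with Python's standard library (the host and values mirror the config above):

```python
# Build the scrape URL that the `params` section above produces.
from urllib.parse import urlencode

# Same collectors as in the YAML config
params = {"collect[]": ["cpu", "meminfo", "loadavg", "netstat"]}

# doseq=True emits one key=value pair per list element
query = urlencode(params, doseq=True)
url = f"http://localhost:9100/metrics?{query}"
print(url)
```

You can test the same filtering by hand with curl against the /metrics endpoint using this URL.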
Start Prometheus
After modifying the configuration file, restart the Prometheus service. Once it is up, open http://localhost:9090 in a browser to view the monitoring data.
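Before querying node metrics, you can confirm the new target is actually being scraped: Prometheus records an up series for every target, with value 1 when the last scrape succeeded. In the expression browser:

```promql
up{instance="localhost:9100", job="node"}
```

A value of 0 means the scrape is failing; check that node_exporter is running and reachable.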
You can filter the display to just the newly added series by entering {instance="localhost:9100",job="node"} in the expression browser.
For example, enter node_cpu_seconds_total{instance="localhost:9100",job="node"} to view the node CPU metrics:
Element                                                                             Value
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="idle"}    3653653.37
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="iowait"}  5653.09
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="irq"}     0
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="nice"}    5.95
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="softirq"} 155.15
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="steal"}   0
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="system"}  14571.01
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="user"}    16084.06
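Note that node_cpu_seconds_total is a counter: it records the total seconds each CPU has spent in each mode since boot, so the raw values only grow. Utilisation is derived from its rate of change, which is what PromQL's rate() function computes, e.g. 100 * (1 - rate(node_cpu_seconds_total{mode="idle"}[5m])). A minimal sketch of that calculation; the two samples below are made-up values for illustration:

```python
# CPU busy percentage from two samples of node_cpu_seconds_total{mode="idle"}.
# This mirrors what PromQL's rate() does over a time window.

def busy_percent(idle_t0: float, idle_t1: float, interval_s: float) -> float:
    """Percent of `interval_s` the CPU spent in non-idle modes."""
    idle_rate = (idle_t1 - idle_t0) / interval_s  # idle seconds per wall-clock second
    return 100.0 * (1.0 - idle_rate)

# Hypothetical samples taken 60 seconds apart: 54 of 60 seconds were idle.
print(busy_percent(3653653.37, 3653707.37, 60.0))
```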