1. Install the Prometheus server
1) Download the package
#wget https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz
Note: if wget is not installed, install it first with #yum install wget -y
2) Extract the package
#tar -zxvf prometheus-2.15.2.linux-amd64.tar.gz
#mkdir /usr/local/prometheus
#mv prometheus-2.15.2.linux-amd64 /usr/local/prometheus
3) Edit the configuration file
#vim /usr/local/prometheus/prometheus-2.15.2.linux-amd64/prometheus.yml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      # The web UI listen address/port can be changed here
      - targets: ['172.17.0.48:9090']
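Indentation in prometheus.yml is significant, so it is worth a quick sanity check after editing. The sketch below (the /tmp path is illustrative) renders the same scrape block from a variable and confirms the target landed in the file; if the bundled promtool is on the PATH, `promtool check config prometheus.yml` gives a full validation.

```shell
# Render a minimal scrape config for one target (illustrative path).
TARGET="172.17.0.48:9090"
cat > /tmp/prometheus-check.yml <<EOF
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['${TARGET}']
EOF
# Confirm the target made it into the file (prints 1).
grep -c "$TARGET" /tmp/prometheus-check.yml
```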
4) Modify file permissions
# chmod -R 777 /usr/local/prometheus/prometheus-2.15.2.linux-amd64/*
5) Create the systemd unit file
#touch /usr/lib/systemd/system/prometheus.service
#vi /usr/lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target
[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/prometheus/prometheus-2.15.2.linux-amd64/prometheus --config.file=/usr/local/prometheus/prometheus-2.15.2.linux-amd64/prometheus.yml
[Install]
WantedBy=multi-user.target
6) Enable and start the service
# systemctl enable prometheus
# systemctl start prometheus
7) Access the web UI
http://172.17.0.48:9090
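Besides loading the UI, Prometheus exposes a /-/healthy endpoint that answers with a short "Healthy" message, which is convenient for scripted checks. A sketch (the check_health helper is illustrative, not part of Prometheus):

```shell
# Illustrative helper: fetch <base-url>/-/healthy and report the result.
check_health() {
  body=$(curl -s "$1/-/healthy")
  case "$body" in
    *Healthy*) echo "prometheus: OK" ;;
    *)         echo "prometheus: FAILED"; return 1 ;;
  esac
}
# Usage against the server installed above:
# check_health "http://172.17.0.48:9090"
```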
2. Install node_exporter on the client
1) Download the package
#wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
2) Extract the package
#tar -zxvf node_exporter-0.18.1.linux-amd64.tar.gz
#mkdir /usr/local/node_exporter/
#mv node_exporter-0.18.1.linux-amd64 /usr/local/node_exporter/
3) Create the systemd unit and enable it at boot
#vim /usr/lib/systemd/system/node_exporter.service
[Unit]
Description=node_exporter
Documentation=https://prometheus.io/
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/node_exporter/node_exporter-0.18.1.linux-amd64/node_exporter
Restart=on-failure
[Install]
WantedBy=multi-user.target
#systemctl enable node_exporter
#systemctl start node_exporter
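Once running, node_exporter serves plain-text metrics on port 9100 (`curl http://localhost:9100/metrics`). The text exposition format is easy to post-process with standard tools; the sample below uses illustrative values to show the shape of the output and how to pull a single gauge out of it:

```shell
# A small sample of the text exposition format node_exporter serves
# on :9100/metrics (values here are illustrative).
cat > /tmp/metrics.sample <<'EOF'
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
EOF
# Extract the value of one metric by name.
awk '$1 == "node_load1" {print $2}' /tmp/metrics.sample
```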
4) Open port 9100 in iptables
#vim /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9100 -j ACCEPT
#systemctl restart iptables
5) Add the new target to the Prometheus server configuration file and restart
#vim /usr/local/prometheus/prometheus-2.15.2.linux-amd64/prometheus.yml
Add:
- job_name: 'agent'
  static_configs:
  - targets: ['172.17.0.47:9100']
#systemctl restart prometheus
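A full restart is not strictly necessary here: Prometheus re-reads its configuration when it receives SIGHUP (e.g. `kill -HUP $(pidof prometheus)`). The sketch below demonstrates the signal/trap mechanism on the current shell rather than on a live Prometheus process:

```shell
# Prometheus reloads prometheus.yml on SIGHUP:
#   kill -HUP "$(pidof prometheus)"
# Demonstration of the mechanism using this shell as the target:
trap 'echo "config reloaded"' HUP
kill -HUP $$
```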
3. Install mysql_exporter (installed on the mysql server to be monitored)
1) Download the compressed package
#wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.12.1/mysqld_exporter-0.12.1.linux-amd64.tar.gz
2) Unzip
#tar -zxvf mysqld_exporter-0.12.1.linux-amd64.tar.gz
#mkdir /usr/local/mysql_exporter/
#mv mysqld_exporter-0.12.1.linux-amd64 /usr/local/mysql_exporter/
#chmod -R +x /usr/local/mysql_exporter
3) mysql_exporter connects to mysql
mysql> GRANT REPLICATION CLIENT,PROCESS ON *.* TO 'root'@'localhost' identified by '123456';
mysql> GRANT SELECT ON *.* TO 'root'@'localhost';
mysql> flush privileges;
4) Create my.cnf under /usr/local/mysql_exporter/mysqld_exporter-0.12.1.linux-amd64/
[client]
user = root
password = 123456
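Since my.cnf stores the password in plain text, it is worth restricting it to its owner. A sketch (using a /tmp path here so it is side-effect free; the real file lives in the exporter directory above):

```shell
# Create a credentials file and lock it down to the owner only.
CNF=/tmp/my.cnf.example
printf '[client]\nuser = root\npassword = 123456\n' > "$CNF"
chmod 600 "$CNF"
# Show the resulting mode bits (prints 600).
stat -c %a "$CNF"
```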
5) Create the systemd unit and enable it at boot
#vim /usr/lib/systemd/system/mysql_exporter.service
[Unit]
Description=mysql_exporter
Documentation=https://prometheus.io/
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/mysql_exporter/mysqld_exporter-0.12.1.linux-amd64/mysqld_exporter --config.my-cnf=/usr/local/mysql_exporter/mysqld_exporter-0.12.1.linux-amd64/my.cnf
[Install]
WantedBy=multi-user.target
#systemctl enable mysql_exporter
#systemctl start mysql_exporter
6) Modify the prometheus server configuration file and restart
#vim /usr/local/prometheus/prometheus-2.15.2.linux-amd64/prometheus.yml
Add:
- job_name: 'mysql'
  static_configs:
  - targets: ['172.17.0.10:9104']
#systemctl restart prometheus
4. Monitor K8S
Prometheus federation is used here: a Prometheus server outside the k8s cluster pulls monitoring data from a Prometheus instance running inside the cluster, and the external server acts as the long-term store for that data. The in-cluster Prometheus can therefore use a simple emptyDir volume for its storage layer and keep data for only 24 hours (or less); because the external server holds the history, the in-cluster instance can safely drift to another cluster node if its node fails.
1) Create a namespace named ns-monitor
# Create namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-monitor
  labels:
    name: ns-monitor
# Apply the manifest
kubectl apply -f namespace.yml
2) Deploy node-exporter in k8s
node-exporter collects the physical metrics of each node in the Kubernetes cluster, such as memory and CPU. It could be installed directly on every physical node, but here a DaemonSet is used to deploy it to each node: hostNetwork: true and hostPID: true give it access to the node's metrics, and a toleration is configured so that a pod is also started on the master node.
# Create the node-exporter.yml file
kind: DaemonSet
apiVersion: apps/v1beta2
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: ns-monitor
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
      hostNetwork: true
      hostPID: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: node-exporter
  name: node-exporter-service
  namespace: ns-monitor
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    app: node-exporter
# Apply the manifest
kubectl apply -f node-exporter.yml
# Check that the pods were created successfully
kubectl get pods -n ns-monitor -o wide
3-1) Create and edit rbac.yml
rbac.yml defines the ServiceAccount, ClusterRole, and ClusterRoleBinding that the Prometheus container needs in order to access the k8s apiserver.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: ns-monitor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: ns-monitor
3-2) Create and edit configmap.yml to configure the prometheus configuration file in configmap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: ns-monitor
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
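The `__address__` relabel rule used in the endpoints and pods jobs joins the discovered host with the port from the `prometheus.io/port` annotation via the regex `([^:]+)(?::\d+)?;(\d+)` and replacement `$1:$2`. The same rewrite can be exercised with sed (POSIX ERE has no non-capturing groups, so the optional port group is made capturing and the replacement uses `\1:\3`):

```shell
# Relabeling input is the source labels joined with ';'.
# Here: discovered address "10.244.1.5:8080", annotated port "9102".
# Prints "10.244.1.5:9102".
echo '10.244.1.5:8080;9102' | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'
```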
3-3) prometheus-deploy.yml defines the Prometheus Deployment
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: ns-monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: harbor.frognew.com/prom/prometheus:2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      imagePullSecrets:
      - name: regsecret
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
3-4) prometheus-svc.yml defines the Prometheus Service
Prometheus must be exposed outside the cluster as a NodePort or LoadBalancer service, or through an Ingress, so that the external Prometheus can reach it.
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: ns-monitor
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
3-5) Use yml file to create object
kubectl create -f rbac.yml
kubectl create -f configmap.yml
kubectl create -f prometheus-deploy.yml
kubectl create -f prometheus-svc.yml
4) Configure Prometheus Federation
With Prometheus deployed on the Kubernetes cluster, the Prometheus server outside the cluster can now be configured to pull data from the one inside. This only requires adding a statically configured job:
- job_name: 'federate'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
    - '{job=~"kubernetes-.*"}'
  static_configs:
  - targets:
    - '<nodeip>:30003'
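The /federate endpoint can also be queried by hand, which is useful when debugging the federation job (replace <nodeip> as above; the query is left as a comment because it needs a live cluster). The executable part only shows how the match[] selector's special characters get percent-encoded into the URL:

```shell
# Manual federation query (needs a live cluster; <nodeip> as above):
#   curl -G 'http://<nodeip>:30003/federate' \
#        --data-urlencode 'match[]={job=~"kubernetes-.*"}'
# Minimal percent-encoding of the selector's special characters:
MATCH='{job=~"kubernetes-.*"}'
printf '%s' "$MATCH" | sed -e 's/{/%7B/g' -e 's/}/%7D/g' -e 's/"/%22/g' -e 's/~/%7E/g'
```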
5) Configure pushgateway
1. Installation with Docker
#systemctl start docker
#systemctl enable docker
#docker pull prom/pushgateway
#docker run -d -p 9091:9091 prom/pushgateway
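Metrics reach the Pushgateway as a plain HTTP POST/PUT of the text exposition format, with the job (and optional instance) encoded in the URL path. A sketch (the metric name and job are illustrative; the curl line needs the container started above):

```shell
# Push one metric to the gateway (requires the running container):
#   echo "backup_duration_seconds 42" | \
#     curl --data-binary @- http://localhost:9091/metrics/job/backup
# The payload itself is just "name value" lines; an illustrative helper:
make_sample() { printf '%s %s\n' "$1" "$2"; }
make_sample backup_duration_seconds 42
```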
2. Standard (non-Docker) installation