Install Grafana in the k8s cluster

In the previous course, we used Prometheus to collect monitoring metrics from the Kubernetes cluster. We also tried writing PromQL queries and displaying the results in the Prometheus dashboard, but Prometheus's charting capabilities are clearly limited, so in practice we normally use a third-party tool to visualize this data. Today we are going to use Grafana.

Installation

Grafana is a visualization tool with very attractive charts and layouts: a full-featured metrics dashboard and graph editor that supports Graphite, Zabbix, InfluxDB, Prometheus, OpenTSDB, Elasticsearch, and more as data sources. Compared with Prometheus's built-in charts it is far more capable and flexible, with a rich plugin ecosystem.

Now let's run the Grafana container as a Pod in Kubernetes: (grafana-deploy.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: kube-ops
  labels:
    app: grafana
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.3.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: grafana
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: admin
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: admin321
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 100m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          subPath: grafana
          name: storage
      securityContext:
        fsGroup: 472
        runAsUser: 472
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: grafana

We use the grafana/grafana:5.3.4 image and add readiness/liveness probes, resource requests and limits, and two important environment variables, GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD, which configure Grafana's administrator username and password. Grafana stores its dashboards and plugin data under /var/lib/grafana, so if we want the data to persist we need to mount a volume at that directory. Everything else is no different from our previous Deployments. Since, as noted in Grafana's changelog, the userid and groupid of the grafana user changed (to 472), we also add a securityContext to declare them.
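
If you want to double-check this once the Deployment is running, something like the following should work (the Pod name is the one that appears later in this article; substitute your own):

# Confirm the container runs as uid/gid 472 (the grafana user)
kubectl exec -n kube-ops grafana-556d7f8c75-2slqv -- id
# Confirm the mounted data directory is owned by that user
kubectl exec -n kube-ops grafana-556d7f8c75-2slqv -- ls -ld /var/lib/grafana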

Of course, since we want to use a PVC object to persist the data, we also need to create an available PV for the PVC to bind to: (grafana-volume.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.168.10.131
    path: /data/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana
  namespace: kube-ops
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
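
Before creating these objects it is worth confirming that the NFS export really exists and, afterwards, that the claim binds; a quick check (assuming the NFS server address and path above, and that showmount is available on the node) might look like this:

# Verify the NFS export is visible from a node
showmount -e 192.168.10.131
# After creating the objects, confirm the PVC is bound to the PV
kubectl get pv grafana
kubectl get pvc grafana -n kube-ops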

Finally, we need to expose the Grafana service externally, so we need a corresponding Service object; either a NodePort type Service or an Ingress object would work, and here we use NodePort: (grafana-svc.yaml)

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-ops
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
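
If you prefer not to open a NodePort while testing, a port-forward to the Service is a simple, local-only alternative:

# Forward local port 3000 to the grafana Service, then browse http://localhost:3000
kubectl port-forward -n kube-ops svc/grafana 3000:3000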

Now we can create the resource objects above:

$ kubectl create -f grafana-volume.yaml
persistentvolume "grafana" created
persistentvolumeclaim "grafana" created
$ kubectl create -f grafana-deploy.yaml
deployment.extensions "grafana" created
$ kubectl create -f grafana-svc.yaml
service "grafana" created

After the creation is complete, we can check whether the Grafana Pod is running normally:

[root@k8s-master grafana]# kubectl get pod -n kube-ops
NAME                          READY   STATUS    RESTARTS   AGE
grafana-556d7f8c75-2slqv      1/1     Running   0          14m

Check the logs:

kubectl logs -f grafana-556d7f8c75-2slqv -n kube-ops
t=2020-09-11T18:01:04+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=0 uname= method=POST path=/login status=401 remote_addr=10.244.2.1 time_ms=399 size=42 referer=http://192.168.10.131:31899/login


Log output like the above shows that our Grafana Pod has started normally. Now we can look at the Service object:

[root@k8s-master grafana]# kubectl get svc -A|grep grafana
kube-ops      grafana                   NodePort    10.1.77.211    <none>        3000:31899/TCP                20m
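
Before opening a browser you can also hit Grafana's health endpoint through the NodePort to confirm it is serving traffic (node IP and port taken from my environment above; adjust to yours):

# /api/health returns a small JSON document when Grafana is up
curl http://192.168.10.131:31899/api/health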

Now we can access the Grafana service in the browser at http://<any-node-IP>:31899 and log in with:

Username: admin
Password: admin321
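
If you want to verify the admin credentials from the command line first, Grafana's HTTP API accepts basic auth (same address and credentials as above):

# Query the current organization as the admin user
curl -u admin:admin321 http://192.168.10.131:31899/api/org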

Since we configured an administrator user above, the first time we open the page we are taken to the login screen, where we can log in with the values of the two environment variables configured earlier. After logging in we land on Grafana's home page:

Configuration

On the home page above we can see that Grafana is installed; next, click Add data source to go to the data source configuration page.

Data source

The data source we are configuring here is Prometheus, so select that type and give the data source a name, for example prometheus-ds. The most important part is the HTTP section below, which configures the data source's access mode.

The access mode controls how requests to the data source are handled:

  • Server access mode (default): all requests are sent from the browser to the Grafana backend server, which forwards them to the data source. This avoids cross-origin problems; in effect the Grafana backend does the forwarding, so the URL must be reachable from the Grafana backend server.
  • Browser access mode: all requests are sent directly from the browser to the data source, which may run into cross-origin restrictions. To use this mode the URL must be reachable directly from the browser.

Since our Prometheus is exposed through a NodePort Service, we could use browser access mode and point at Prometheus's external address, but that is clearly not ideal: it is equivalent to going out over the external network, while Prometheus and Grafana are both in the kube-ops namespace and can reach each other directly via in-cluster DNS, keeping everything on the internal network. So server access mode is the better choice here. Set the data source URL to http://prometheus:9090 (because they are in the same namespace, the Service name alone is enough). Fill in the other settings according to your environment; we have no Auth to configure here, so skip it and click Save & Test at the bottom. A success message confirms the data source is configured correctly.
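
Before saving, you can confirm from inside the cluster that the Service DNS name resolves and that Prometheus responds (this assumes the Prometheus Service really is named prometheus in the kube-ops namespace, as described above):

# Query Prometheus' health endpoint through the in-cluster Service name
kubectl run -it --rm dns-test --image=busybox --restart=Never -n kube-ops \
  -- wget -qO- http://prometheus:9090/-/healthy
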

Add Dashboard

After adding the data source, we can add a Dashboard.
Add k8s dashboard:

There are many public dashboards on the Grafana website that we can use. Here we use the Kubernetes cluster monitoring (via Prometheus) dashboard (dashboard id 162) to display the monitoring information of the Kubernetes cluster. In the left sidebar click Create -> Import to import it:
It should be noted that before performing the import, remember to select our data source named prometheus. After executing the import you are taken to the dashboard page:
We can see that the dashboard page contains many nice-looking charts, but the data looks wrong. This is because the metric names this dashboard expects do not match the metric names actually collected by our Prometheus. Take the first panel, Cluster memory usage, as an example: click the panel title -> Edit to open the edit page for this chart:
On the edit page we can see this chart's query statement:

(sum(node_memory_MemTotal) - sum(node_memory_MemFree+node_memory_Buffers+node_memory_Cached) ) / sum(node_memory_MemTotal) * 100

This is a PromQL statement like the ones we ran in Prometheus earlier. If we copy this query into Prometheus's Graph page, it returns no data, as expected: the metrics collected by node_exporter are not named node_memory_MemTotal and so on, but node_memory_MemTotal_bytes. So we adjust the PromQL accordingly:

(sum(node_memory_MemTotal_bytes) - sum(node_memory_MemFree_bytes + node_memory_Buffers_bytes+node_memory_Cached_bytes)) / sum(node_memory_MemTotal_bytes) * 100

This expression means (total cluster memory - (free memory + Buffers + Cached)) / total cluster memory, i.e. the percentage of cluster memory in use. Replace the panel's PromQL with this expression and the chart displays normally.
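
If you want to double-check that the renamed metrics really exist before editing more panels, you can also query the Prometheus HTTP API directly (again via a throwaway Pod, assuming the same prometheus Service name; longer expressions need URL-encoding, so only a single metric is queried here):

# /api/v1/query evaluates an instant query and returns JSON
kubectl run -it --rm promql-test --image=busybox --restart=Never -n kube-ops \
  -- wget -qO- "http://prometheus:9090/api/v1/query?query=node_memory_MemTotal_bytes"
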
Similarly, we can fix the CPU and FileSystem usage panels:

CPU usage:

sum(sum by (container_name)(rate(container_cpu_usage_seconds_total{image!=''}[1m]))) / count(node_cpu_seconds_total{mode="system"}) * 100

FileSystem usage:

(sum(node_filesystem_size_bytes{device="rootfs"}) - sum(node_filesystem_free_bytes{device="rootfs"})) / sum(node_filesystem_size_bytes{device="rootfs"}) * 100

Similarly, the Pod CPU Usage panel below shows CPU usage per Pod; the corresponding PromQL statement, aggregated by pod name, is as follows:

sum by (pod) (rate(container_cpu_usage_seconds_total{image!='', pod!=''}[1m]))

Pod memory usage:

sort_desc(sum(container_memory_usage_bytes{image!="", pod!=""}) by (pod))

Network I/O:

A:
sort_desc(sum by (pod) (rate(container_network_receive_bytes_total{name!=""}[1m])))
B:
sort_desc(sum by (pod) (rate(container_network_transmit_bytes_total{name!=""}[1m])))


Finally, remember to save this dashboard.

Add Kubernetes data source (plug-in method)

We can also install other plugins. For example, Grafana has a plugin specifically for Kubernetes cluster monitoring: grafana-kubernetes-app.

To install this plugin, run the install command inside the Grafana Pod:

[root@k8s-master ~]# kubectl exec -it grafana-556d7f8c75-2slqv /bin/bash -n kube-ops
grafana@grafana-556d7f8c75-2slqv:/usr/share/grafana$ grafana-cli plugins install grafana-kubernetes-app
installing grafana-kubernetes-app @ 1.0.1
from url: https://grafana.com/api/plugins/grafana-kubernetes-app/versions/1.0.1/download
into: /var/lib/grafana/plugins

✔ Installed grafana-kubernetes-app successfully 

Restart grafana after installing plugins . <service grafana-server restart>

After the installation is complete, Grafana needs to be restarted for the plugin to take effect. Since Grafana runs in a Pod, the simplest way is to delete the Pod and let the Deployment rebuild it.
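
A minimal sequence for that (the Pod name here is just the one from my environment) might be:

# The plugin was installed under /var/lib/grafana/plugins, which lives on the PVC,
# so it survives the Pod being recreated
kubectl exec -n kube-ops grafana-556d7f8c75-2slqv -- ls /var/lib/grafana/plugins
# Delete the Pod; the Deployment controller creates a replacement that loads the plugin
kubectl delete pod grafana-556d7f8c75-2slqv -n kube-ops
kubectl get pod -n kube-ops -w
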
Then go back to the Grafana page and switch to the Plugins page; you will find a new Kubernetes plugin there. Click to enable it, then click Next up to configure a cluster. Here we can add a new Kubernetes cluster: fill in the address for accessing the cluster, https://kubernetes.default, and then, more importantly, the certificates for cluster access; check both the TLS Client Auth and With CA Cert options.

A brief explanation of the fields:

  • Name: the cluster name (anything you like)
  • URL: the Kubernetes apiserver address (see the command below); because the apiserver serves HTTPS, you also need to provide the certificates and key
  • Datasource: select the data source (the prometheus data source created earlier)
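
The apiserver URL for the URL field can be read straight from the local kubeconfig (assuming kubectl is configured against this cluster):

# Print the cluster endpoints, including the apiserver address
kubectl cluster-info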

Next we need to look at the apiserver information in the kubeconfig file:

[root@k8s-master ~]# cat /root/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ESXdPVEE1TkRFd01Wb1hEVE13TURJd05qQTVOREV3TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTGNZCjhweDdhZmlIMGdDQkdMM2xmMHEremI2dUxEdmdNNytxcCswS0RwVExZN0RjWlhIUkVlcTE0dHBaTEtUOHRoQmcKd1piS2ppSWVWTUpaRmZkdkZzQ2ZXaExxd3k3K2wxYTVKRGtxTjBhei8ySU1zS1BOZm0yN3VIVmpMZHkvK2ZRWgpUZm15akVJRXk2MGtpdHhCRUxuNm9ab2V1aUFBcVoyd3RJcm0xZzJQUmRQVmQvZHYxZWd3NjRzZFo0QXBPcnd6ClN2RkVXcXVXYm1LV1dnMHZrb09VOGwwS0o3YUpGUS94VFE3NmVuUkxjMEMxT3JlWUJFRVpjMkxHVEVkcFF4dUgKYmRvcjM1Smd5MTQzTDNhQUpxSFRJYVdYdGFFVmdrb2REMmxSdGRVWGJMQ1dzZHpWOXkwVENIVnpNZjRPWktIbwpIZXNrRzZaMDExWXFvRHJiWTRFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCdys2MFpCTWViZ2wwVkRXOUJmcjJXWGI4YUQKUHZ1N1poWGhqTGVHR09adnNhMUdKQ2lBRVp2a1V2RzFjS0k5VWROU294RnlrK0lxS2Z0TjJMekFNRG1SYVJaWQo0aVVkZ1I3dGZLdGZvdTdHRzZQVkorbzhXTlpCREc3d3QxVmtNODlzOVlsTzMrZTc3NFFqaVB6aGM2N29xekFRCmJqMTRhcktLaG1Ic3phUGlXL1IwZXZDQmtrcks0aDVXbXpjeGQvczRuY210YVVtR3ZwQmh4cDMyZWxqRU9WaFQKSTBLZnY5K0RMM09GdlhPWXR4RU9sRmh3dTg1MEdQWVZXSWRVYm5ETXQzZDNXeG1GSjl3RFNET0ZqbVZYcjJrZApBejRSSWp0YlNGQnpzT0ZzNWRPTTR5Rmg1MU0rWVdFNjMzYi9INFdPMkZaNVR3QXc5RVZSMmJESmhBUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.10.130:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJVWpQYmhvaWd6bUl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBeU1Ea3dPVFF4TURGYUZ3MHlNVEF5TURnd09UUXhNRFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXVyME54N2JhVkpwSzB4RnMKQXN2eGJlb0JldkdCeW1hb2ZxdnFhTjRIREhoRENaU0FKeFY0bzVHaktlNENGMWRFR2xYNks4L0d5WXBLcXY4QQo3UUZjK0lvK01JazEyZit1Q2RtK05RemJQaDF1OUMvRzluZTNWYzJvNnk4UHk3OVh1bUxWVGhLQU1LVWNmcFQ3ClNZbXFBUUUzbS9aVjhoSzVTRzIydXh3VklVUDFsWnVPVmxBVUpJYzNQVmNDZjdwcVpNMzltWUdhQ2V4RFhRS20KaklqZWtSbkFWUXgwOVZ0Q2NoeE1uaEVlQlpETHgrWlQzT2dwTjhWYzJyTDUzOHVRanFjNHhhSFBKNHdKVTFPUwpVSk9KZXRDTDFYaUV3RnRYM2Zicm1tYzdhbGVCU3dmYy85Z3d3YjBnbVVQQnhlLzdzcStubUlsVUQwckV3bmR6Ck1HTWx2UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGRVgxUnQ4aUxrelIxbisrMExSZklLVGM3RUdXQWFCeGM4bgpQMmdLWjVxSGEyK3NLV2F2clR3MElSTG1HYW0wZ3FNa3FySG9qRUJ4Q3RkNXFOY250UjJvM1pPY3FWOG96ZW9SCjZteE40NGo0Q1VTenBoSXdwNi92eHMxNlBZRlk2M1B1YnRNNUV6ZExyQUJJNDRMdElQMFZjYTRwUWVUYTI3WTQKMUNobWltaVc3L29iQXlOcUNjRWtJTCtzRGxvUXpWK09IZ2JMaWx3MWlBblVnS2YvS1JMbFJyc2dOV0pDaklhYwp6dUFFT2QxVnZvYkF6QWRZelFTWFZzc1ZmRnpCcnI0S0NtK2gvb21KYUlLb0paTlpUWGY4Mmx0OW9WQnk3eUpUCkNYQWV0dUJyaUpnRk5nbS9FczVYT1h3TWxTb2htQTZmN1ZPL01JQlVyeVd4dnozRHppbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdXIwTng3YmFWSnBLMHhGc0FzdnhiZW9CZXZHQnltYW9mcXZxYU40SERIaERDWlNBCkp4VjRvNUdqS2U0Q0YxZEVHbFg2SzgvR3lZcEtxdjhBN1FGYytJbytNSWsxMmYrdUNkbStOUXpiUGgxdTlDL0cKOW5lM1ZjMm82eThQeTc5WHVtTFZUaEtBTUtVY2ZwVDdTWW1xQVFFM20vWlY4aEs1U0cyMnV4d1ZJVVAxbFp1TwpWbEFVSkljM1BWY0NmN3BxWk0zOW1ZR2FDZXhEWFFLbWpJamVrUm5BVlF4MDlWdENjaHhNbmhFZUJaREx4K1pUCjNPZ3BOOFZjMnJMNTM4dVFqcWM0eGFIUEo0d0pVMU9TVUpPSmV0Q0wxWGlFd0Z0WDNmYnJtbWM3YWxlQlN3ZmMKLzlnd3diMGdtVVBCeGUvN3NxK25tSWxVRDByRXduZHpNR01sdlFJREFRQUJBb0lCQUFVTEdoWXN3QlRNM2Z4NQpXZnR4V3FIblVnYnFBdUZlaUdwelppOVMzOG5jYmFNU21hdDBqditMN1dZeWdXZnorV2prclk4Rlc0OFI1eFpiCk1NRTE2amJrTk8zR3B1ZXVXaHIyQUljYVE4bVhyZWwyYU44N09INWV3Wk1vZ0RxMmZqNFFjVVpjaFkzS3g4dzcKWmRZRW04elBKWnRXdWRlQjNmTXcwMkNXVDVQSVlFVlpybVdJcUYwampYOGhVRmpZcEFjdENzMkJiRStxYnhCaApWZlJlSXpwMGlLTTk1V2E3WVpVdTNBUXgwcHRuWGVjeUNZbUVob0wyeGVMODc1bTVabkpGTzBnYm9zbmZGY0k3CnJiZjZHUkZKSVZJRjg0WU9YbmZRbFpqNnQ1Vlg1V2RkbFdOTjB5MlE3TFFwR3VlVEo1QWRnTlpEUmZpdkE2ODUKRTIvdlk1a0NnWUVBNUU1dTBBSVduemJ3Ym1NV2wrQ3laVGVTbEpjNU5QdDRVZWFzMFBNU0ZrZ3NlT1cyaDFLRApSYXlNRmk4ZENsVjB1bnFwNkt6ZEd2MUFObURTRUxpeEMxQmg2bkVkempORUtQakR4aVRQUERtOWRDNzJsZG4wCko2RjJ2R3pKbXM1dHJHV2llYkJWTWtKbms4d2NvcUszMC85a1BaRDd2U3RoYnZBQUk3T2RXK2NDZ1lFQTBXUFIKcVZOK1VSYXRRV0Nray9hSk1nb1RmcC9tWERpVDhqWlppK0V3UW83MnFxOTV0RVZNT2dPeEEvTGh2THNiQXNIUgprdUdrcHI3UlpMUG01blF4V1FpRGtCYkU5WFJ4bFloWTJCU203MVVCWEFvZXg5a2N4WGlFQjV5bDcwZlArcHQ5Cms4ZGhxS3h4MitjMjRJN1JUUjhKVXV3T2tXYUVWQTFkd25nR1hMc0NnWUI2NVRXRlJ2cUNiZ0p5aVdoS0RTdzYKaS9XZGd1SEtnV3M5T3h6ZnhWaUJJZ3krYjNrWDB2VFM5cFRhQkRadnI1eU1IU2VGRmpoWEpPZ0IzWkIyYTlUeApzQzFsRThybGluY3dUdWlqcW9EYmZJRmRIMEtoVzVld0ZaeGl4WFNvbm1JdklPNmE3cTZOeFcwWUJCR09BbVZOCit2WXNwZlM4MmJNekVvSWd0YmtKRlFLQmdRQzhQR1FyT0tnQjljVGpWU0llOGk3OEVScmRacG9NcGNBNnFxbHQKbW85c0JtR3hwL1pkSFQ4ZG1GdjJGTTdpZjhJVWhIRUcvbHFxbkRoWnMzRU1FOENaTFpJNFluL0Z1Vnl5OU5RSgp6T2NWbVBHVDhIVWpiQWIxYnhZaVVhektvMkJSQnArcHpqLzVCcTJFNXlMcVZQbkx2dTcrNEw5bjd5VmUrblVqCmNnc21LUUtCZ0hFYWpSUGRZamZjbHNQUkZCSUdRS1lUa3NPM01uMEpHRm4vYzZXcUo4ZGFFNHZGM1YvRlQ4aHoKais5T1Q2SEpXZzhOaHJjSnVOQm9pWHpzV3NGTm10WFpqMmlXUVBhcm52SU9YS2pQdnNOQ2ZjTzdXRGxFbCsvKwpBS24zUC9XY2ZuZy8veVZrbjNVTy9yQTMvVzJMY1pBeFdUNitJd2lPekFYcnNQTEx6UWc5Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

certificate-authority-data = CA Cert

server = 192.168.10.130:6443 (this is the apiserver address, i.e. the URL field)

client-certificate-data = Client Cert

client-key-data = Client Key

Note that the data in the config file is base64-encoded, so when filling in the form we need to decode it with base64 first:

echo "LS0tLS1CS0tLQo=............."|base64 -d

Fill in the decoded information; for example, the decoded CA certificate looks like this:

-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIwMDIwOTA5NDEwMVoXDTMwMDIwNjA5NDEwMVowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALcY
8px7afiH0gCBGL3lf0q+zb6uLDvgM7+qp+0KDpTLY7DcZXHREeq14tpZLKT8thBg
wZbKjiIeVMJZFfdvFsCfWhLqwy7+l1a5JDkqN0az/2IMsKPNfm27uHVjLdy/+fQZ
TfmyjEIEy60kitxBELn6oZoeuiAAqZ2wtIrm1g2PRdPVd/dv1egw64sdZ4ApOrwz
SvFEWquWbmKWWg0vkoOU8l0KJ7aJFQ/xTQ76enRLc0C1OreYBEEZc2LGTEdpQxuH
bdor35Jgy143L3aAJqHTIaWXtaEVgkodD2lRtdUXbLCWsdzV9y0TCHVzMf4OZKHo
HeskG6Z011YqoDrbY4ECAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBABw+60ZBMebgl0VDW9Bfr2WXb8aD
Pvu7ZhXhjLeGGOZvsa1GJCiAEZvkUvG1cKI9UdNSoxFyk+IqKftN2LzAMDmRaRZY
4iUdgR7tfKtfou7GG6PVJ+o8WNZBDG7wt1VkM89s9YlO3+e774QjiPzhc67oqzAQ
bj14arKKhmHszaPiW/R0evCBkkrK4h5Wmzcxd/s4ncmtaUmGvpBhxp32eljEOVhT
I0Kfv9+DL3OFvXOYtxEOlFhwu850GPYVWIdUbnDMt3d3WxmFJ9wDSDOFjmVXr2kd
Az4RIjtbSFBzsOFs5dOM4yFh51M+YWE633b/H4WO2FZ5TwAw9EVR2bDJhAQ=
-----END CERTIFICATE-----
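
Rather than copying the base64 strings out of the file by hand, you can also let kubectl decode them; something along these lines should produce the same PEM blocks (the jsonpath indexes assume a single-cluster kubeconfig like the one above):

# CA certificate (certificate-authority-data)
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d
# Client certificate (client-certificate-data)
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d
# Client key (client-key-data)
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d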

My environment configuration is as follows:
Then we click Save and check the resulting charts. If a panel shows no data, its query statement needs to be adjusted:
Cluster disk usage:

(sum(node_filesystem_size_bytes{instance=~"$node"}) - sum(node_filesystem_free_bytes{instance=~"$node"})) / sum(node_filesystem_size_bytes{instance=~"$node"})

Set up Grafana alerting
