[prometheus]-07 Kube-state-metrics cluster resource monitoring for Kubernetes cloud native monitoring

Overview

Kubernetes cloud-native cluster monitoring mainly involves three types of metrics: node (physical machine) metrics, pod & container resource metrics, and Kubernetes cluster resource-object metrics. There are relatively mature solutions for all three, as shown in the figure below:

In the previous section we walked through how cAdvisor monitors container performance metrics. In this section we analyze the monitoring of cloud-native cluster resources.

While a Kubernetes cluster is running, we want to know the state of the services on it, and this is where kube-state-metrics comes in. It focuses on the status of cluster resource objects such as Deployment, Service, and Pod.

kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the different resources.
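
For example, the availability of a Deployment and the phase of a Pod show up as gauges like these (illustrative series; the metric names are real kube-state-metrics metrics, but the label values are made up):

kube_deployment_spec_replicas{namespace="default",deployment="nginx"} 3
kube_deployment_status_replicas_available{namespace="default",deployment="nginx"} 3
kube_pod_status_phase{namespace="default",pod="nginx-6799fc88d8-7xvzk",phase="Running"} 1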

cAdvisor is integrated into Kubernetes by default, but kube-state-metrics is not. So if we want complete monitoring data for the cluster, we need to deploy the kube-state-metrics component in Kubernetes separately; it then exposes the resource-state metrics of the cluster so that the different resources can be monitored.

Environment information

The Kubernetes cluster environment I built is shown in the figure below; the rest of this article is demonstrated on this cluster:

kube-state-metrics installation

1. Select a kube-state-metrics version compatible with your Kubernetes version; the compatibility matrix is in the project README: https://github.com/kubernetes/kube-state-metrics

[root@master kube-state-metrics]# kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5", GitCommit:"e338cf2c6d297aa603b50ad3a301f761b4173aa6", GitTreeState:"clean", BuildDate:"2020-12-09T11:18:51Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5", GitCommit:"e338cf2c6d297aa603b50ad3a301f761b4173aa6", GitTreeState:"clean", BuildDate:"2020-12-09T11:10:32Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

My k8s version is v1.19.5, so kube-state-metrics v2.1.1 is chosen here.

2. Create a kube-state-metrics directory on the master node, and copy the files from the examples/standard directory of the unpacked kube-state-metrics-2.1.1.zip into it:

[root@master kube-state-metrics]# ls -lah
total 20K
drwxr-xr-x. 2 root root  135 Jul 21 13:40 .
drwxr-xr-x. 5 root root   74 Jul 21 13:39 ..
-rw-r--r--. 1 root root  376 Jul 29  2021 cluster-role-binding.yaml
-rw-r--r--. 1 root root 1.6K Jul 29  2021 cluster-role.yaml
-rw-r--r--. 1 root root 1.2K Jul 29  2021 deployment.yaml
-rw-r--r--. 1 root root  192 Jul 29  2021 service-account.yaml
-rw-r--r--. 1 root root  405 Jul 29  2021 service.yaml
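
If you still need to fetch the archive, one way is GitHub's generic tag-archive URL (a sketch; any mirror of the repository works as well):

# download and unpack the v2.1.1 source archive, then copy the standard manifests
wget https://github.com/kubernetes/kube-state-metrics/archive/refs/tags/v2.1.1.zip -O kube-state-metrics-2.1.1.zip
unzip kube-state-metrics-2.1.1.zip
cp kube-state-metrics-2.1.1/examples/standard/*.yaml kube-state-metrics/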

Since the kube-state-metrics component needs to connect to kube-apiserver and call the corresponding APIs to obtain cluster data, it must have certain permissions to perform these operations. Kubernetes manages permissions with RBAC by default, so we need to create the corresponding RBAC resources for this component; that is what service-account.yaml, cluster-role.yaml, and cluster-role-binding.yaml do.
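
A trimmed sketch of those RBAC objects (the real cluster-role.yaml in the release grants list/watch on many more resource types):

# service-account.yaml (sketch)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-system
---
# cluster-role.yaml (trimmed sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources: ["pods", "services", "nodes", "namespaces", "endpoints",
              "secrets", "configmaps", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "replicasets", "statefulsets"]
  verbs: ["list", "watch"]
---
# cluster-role-binding.yaml (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system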

In deployment.yaml, pay attention to the two ports it exposes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  labels:
    k8s-app: kube-state-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.1
        securityContext:
          runAsUser: 65534
        ports:
        - name: http-metrics    ## port exposing the Kubernetes resource metrics
          containerPort: 8080
        - name: telemetry       ## port exposing kube-state-metrics' own telemetry metrics
          containerPort: 8081

3. Apply the manifests:

[root@master kube-state-metrics]# kubectl  apply -f  ./
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created

4. Check the result:

# check whether the pod is running
[root@master kube-state-metrics]# kubectl get pod -n kube-system -owide |grep kube-state-metrics 
kube-state-metrics-5f84848c58-v7v9z        1/1     Running   0          50m    10.100.166.135   node1    <none>           <none>
[root@master kube-state-metrics]# kubectl get svc -n kube-system |grep kube-state-metrics
kube-state-metrics   ClusterIP   None           <none>        8080/TCP,8081/TCP        50m
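
Note that the Service is headless (CLUSTER-IP is None), so there is no virtual IP to hit. To spot-check the endpoint from the master without looking up pod IPs, kubectl port-forward is an alternative (the local port 8080 below is an arbitrary choice):

kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &
curl localhost:8080/metrics | head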

Pay special attention to the image: because its prefix is k8s.gcr.io, the pull will fail from networks that cannot reach that registry. Pull bitnami/kube-state-metrics:2.1.1 instead, then use docker tag to rename it to k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.1 (do this on the node where the pod is scheduled):

[root@node1 ~]# docker pull bitnami/kube-state-metrics:2.1.1
[root@node1 ~]# docker tag f0db7c5a6de8 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.1
[root@node1 ~]# docker images
REPOSITORY                                         TAG                 IMAGE ID            CREATED                  SIZE
bitnami/kube-state-metrics                         2.1.1               f0db7c5a6de8        Less than a second ago   121MB
k8s.gcr.io/kube-state-metrics/kube-state-metrics   v2.1.1              f0db7c5a6de8        Less than a second ago   121MB
registry.aliyuncs.com/k8sxio/kube-proxy            v1.19.5             6e5666d85a31        7 months ago             118MB

5. Verify that the metrics are collected successfully by requesting port 8080 on the kube-state-metrics pod IP:

[root@master kube-state-metrics]# curl 10.100.166.135:8080/metrics
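
If the collection works, the response is in the Prometheus exposition format; a trimmed, illustrative sample (the actual series depend on your cluster):

# HELP kube_pod_status_phase The pods current phase.
# TYPE kube_pod_status_phase gauge
kube_pod_status_phase{namespace="kube-system",pod="kube-state-metrics-5f84848c58-v7v9z",phase="Running"} 1

The self-telemetry metrics are served the same way on port 8081 (curl 10.100.166.135:8081/metrics).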

Prometheus access

1. The Service created for kube-state-metrics is of type ClusterIP (headless, in fact), so it can only be accessed from within the cluster; Prometheus therefore needs to be deployed on the cluster nodes, otherwise the pod IPs may not be reachable. The scrape job uses pod service discovery:

- job_name: 'kube-state-metrics'
  metrics_path: /metrics
  kubernetes_sd_configs:
  - role: pod
    api_server: https://apiserver.simon:6443
    bearer_token_file: /tools/token.k8s
    tls_config:
      insecure_skip_verify: true
  bearer_token_file: /tools/token.k8s
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_pod_ip]
    regex: (.+)
    target_label: __address__
    replacement: ${1}:8080
  - source_labels: [__meta_kubernetes_pod_container_name]
    regex: "^kube-state-metrics.*"
    action: keep
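
Before reloading Prometheus, it is worth validating the configuration; promtool ships with the Prometheus distribution (the path to prometheus.yml below is an assumption, adjust it to your install):

promtool check config /usr/local/prometheus/prometheus.yml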

The list of metrics scraped (a few example queries over these follow the list):

kube_limitrange{}
kube_replicaset_created{}
kube_persistentvolumeclaim_status_phase{}
kube_pod_container_status_terminated{}
kube_secret_info{}
kube_service_info{}
kube_daemonset_status_observed_generation{}
kube_node_role{}
kube_persistentvolume_claim_ref{}
kube_pod_start_time{}
kube_configmap_info{}
kube_daemonset_created{}
kube_endpoint_address_not_ready{}
kube_node_created{}
kube_pod_init_container_status_waiting{}
kube_secret_metadata_resource_version{}
kube_pod_container_resource_requests{}
kube_pod_status_ready{}
kube_secret_created{}
kube_persistentvolume_capacity_bytes{}
kube_persistentvolumeclaim_info{}
kube_pod_status_reason{}
kube_secret_type{}
kube_deployment_spec_strategy_rollingupdate_max_unavailable{}
kube_deployment_status_condition{}
kube_pod_container_status_ready{}
kube_pod_created{}
kube_deployment_spec_replicas{}
kube_ingress_metadata_resource_version{}
kube_ingress_tls{}
kube_persistentvolumeclaim_resource_requests_storage_bytes{}
kube_deployment_status_replicas{}
kube_limitrange_created{}
kube_namespace_status_phase{}
kube_node_info{}
kube_endpoint_address_available{}
kube_ingress_labels{}
kube_pod_init_container_status_restarts_total{}
kube_daemonset_status_number_unavailable{}
kube_endpoint_created{}
kube_pod_status_phase{}
kube_deployment_spec_strategy_rollingupdate_max_surge{}
kube_deployment_status_replicas_available{}
kube_node_spec_unschedulable{}
kube_deployment_metadata_generation{}
kube_lease_renew_time{}
kube_node_status_capacity{}
kube_persistentvolumeclaim_access_mode{}
kube_daemonset_status_updated_number_scheduled{}
kube_namespace_created{}
kube_persistentvolume_status_phase{}
kube_pod_container_status_running{}
kube_daemonset_metadata_generation{}
kube_node_status_allocatable{}
kube_pod_container_resource_limits{}
kube_pod_init_container_status_terminated_reason{}
kube_configmap_created{}
kube_ingress_path{}
kube_pod_restart_policy{}
kube_replicaset_status_ready_replicas{}
kube_namespace_labels{}
kube_pod_status_scheduled_time{}
kube_configmap_metadata_resource_version{}
kube_pod_info{}
kube_pod_spec_volumes_persistentvolumeclaims_info{}
kube_replicaset_owner{}
kube_pod_owner{}
kube_pod_status_scheduled{}
kube_daemonset_labels{}
kube_deployment_created{}
kube_deployment_spec_paused{}
kube_persistentvolume_info{}
kube_pod_container_status_restarts_total{}
kube_pod_init_container_status_ready{}
kube_service_created{}
kube_persistentvolume_labels{}
kube_daemonset_status_number_available{}
kube_node_spec_taint{}
kube_pod_completion_time{}
kube_pod_container_info{}
kube_pod_init_container_status_running{}
kube_replicaset_labels{}
kube_daemonset_status_number_ready{}
kube_deployment_status_observed_generation{}
kube_ingress_info{}
kube_node_labels{}
kube_pod_container_status_terminated_reason{}
kube_pod_init_container_info{}
kube_daemonset_status_number_misscheduled{}
kube_deployment_status_replicas_updated{}
kube_endpoint_info{}
kube_endpoint_labels{}
kube_secret_labels{}
kube_deployment_status_replicas_unavailable{}
kube_lease_owner{}
kube_pod_container_status_waiting{}
kube_daemonset_status_current_number_scheduled{}
kube_ingress_created{}
kube_replicaset_metadata_generation{}
kube_deployment_labels{}
kube_node_status_condition{}
kube_pod_container_status_last_terminated_reason{}
kube_pod_init_container_status_terminated{}
kube_service_spec_type{}
kube_persistentvolumeclaim_labels{}
kube_pod_container_state_started{}
kube_pod_labels{}
kube_replicaset_status_observed_generation{}
kube_service_labels{}
kube_daemonset_status_desired_number_scheduled{}
kube_pod_spec_volumes_persistentvolumeclaims_readonly{}
kube_replicaset_status_replicas{}
kube_replicaset_spec_replicas{}
kube_replicaset_status_fully_labeled_replicas{}
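
These metrics can drive ad-hoc queries and alerting rules directly; a few illustrative PromQL sketches (label matchers depend on your cluster):

# pods stuck outside the Running/Succeeded phases
sum by (namespace, pod) (kube_pod_status_phase{phase!~"Running|Succeeded"}) > 0

# deployments whose available replicas lag behind the desired count
kube_deployment_spec_replicas - kube_deployment_status_replicas_available > 0

# containers that restarted in the last hour
increase(kube_pod_container_status_restarts_total[1h]) > 0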

2. Check on the Prometheus Targets page whether the endpoint is up:

Dashboard configuration

Import Grafana dashboard 14518; the kube-state-metrics monitoring metrics are displayed on the template, as shown in the figure below:

