Kubernetes Series: Prometheus Operator

The Operator pattern was developed by CoreOS to extend the Kubernetes API with application-specific controllers that create, configure, and manage complex stateful applications such as MySQL, caches, and monitoring systems. CoreOS officially provides several Operator implementations, including the Prometheus Operator.

The diagram below shows the Prometheus Operator architecture.

The Operator is the core controller. It creates four resource objects: Prometheus, ServiceMonitor, Alertmanager, and PrometheusRule, and then continuously watches and maintains the state of these four objects. The Prometheus resource object corresponds to the Prometheus Server itself, while a ServiceMonitor is an abstraction over the metrics endpoints exposed by the various exporters (as described earlier in this series, an exporter is a tool that exposes metrics for our services); Prometheus pulls the metrics data through the interfaces described by the ServiceMonitors. With this model we no longer need to modify the Prometheus scrape rules for each service separately; we monitor the cluster by managing these resources directly through the Operator. Note also that a ServiceMonitor selects Services inside the cluster by label, and a Prometheus instance can in turn select multiple ServiceMonitors by label.

(Figure: Prometheus Operator architecture, prometheus-operator.png)

Here the Operator is the core component, acting as the controller: it creates the Prometheus, ServiceMonitor, Alertmanager, and PrometheusRule CRD resource objects, and then continuously watches and maintains the status of these four CRD objects.

  • Prometheus: the resource object that represents the Prometheus Server itself
  • ServiceMonitor: an abstraction over an exporter's metrics endpoint; Prometheus pulls metrics through the interface a ServiceMonitor describes
  • Alertmanager: the resource object corresponding to the Alertmanager component
  • PrometheusRule: the resource object holding the alerting rule files used by a Prometheus instance
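As a sketch of how these objects fit together, a minimal Prometheus resource selects ServiceMonitors by label. This is an illustrative assumption, not one of the official kube-prometheus manifests; the `team: frontend` label is a hypothetical example:

```yaml
# Minimal sketch of a Prometheus custom resource. Any ServiceMonitor
# carrying the label team=frontend is picked up and scraped.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: monitoring
spec:
  replicas: 2
  serviceAccountName: prometheus-k8s   # RBAC identity used for service discovery
  serviceMonitorSelector:
    matchLabels:
      team: frontend                   # hypothetical label; selects ServiceMonitors
  resources:
    requests:
      memory: 400Mi
```

The Operator notices this object and generates the corresponding Prometheus Server deployment and scrape configuration for it.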

CRD overview
CRD is short for CustomResourceDefinition. In Kubernetes everything can be seen as a resource, and since Kubernetes 1.7 CRDs have allowed custom resources to be added, opening the Kubernetes API up for extension and secondary development. When we create a new CRD, the Kubernetes API server creates a new RESTful resource path for each version you specify, and we can then create objects of our custom resource type under that API path. A CRD can be namespaced or cluster-wide, as specified in the CRD's scope field; as with the existing built-in objects, deleting a namespace deletes all custom objects in that namespace.

Simply put, a CRD is an extension of the Kubernetes API. Every Kubernetes resource is a collection of API objects; for example, the spec defined in a yaml file is the definition of a Kubernetes resource object. All custom resources can be used with kubectl just like Kubernetes built-in resources.
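As an illustration, defining a new resource type is itself just a yaml object. This is a hypothetical CRD (not one of the Operator's); the `apiextensions.k8s.io/v1beta1` API matches the Kubernetes versions discussed in this article:

```yaml
# Hypothetical example CRD: after applying it, `kubectl get crontabs`
# works just like any built-in resource.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true          # this version is served by the API server
    storage: true         # this version is used for persistence
  scope: Namespaced       # or Cluster for a cluster-wide resource
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

This is exactly what the Operator's manifests do for Prometheus, ServiceMonitor, Alertmanager, and PrometheusRule.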

Thus, monitoring targets in the cluster become Kubernetes resource objects that we can manipulate directly. A ServiceMonitor selects a class of Kubernetes Service resource objects via a LabelSelector, and a Prometheus instance can match multiple ServiceMonitors via a LabelSelector. Prometheus and Alertmanager automatically pick up changes to the monitoring and alerting configuration, so there is no need for a manual reload.


Installation

The Operator has native support for Prometheus, can discover the services to monitor within the cluster, and offers a fairly universal installation. The yaml files provided with the Operator can basically be used for Prometheus as-is; only a few places may need changes.

# official download (if the image versions from the official download do not match, find the image versions yourself)
wget -P /root/ https://github.com/coreos/kube-prometheus/archive/master.zip
unzip master.zip
cd /root/kube-prometheus-master/manifests

prometheus-serviceMonitorKubelet.yaml (this file is used to collect the metrics data of our services)

No changes are needed here.


cat prometheus-serviceMonitorKubelet.yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: kubelet
  name: kubelet
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    interval: 30s
    port: https-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    interval: 30s
    metricRelabelings:
    - action: drop
      regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
      sourceLabels:
      - name
    path: /metrics/cadvisor
    port: https-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: k8s-app
  namespaceSelector:    # which namespace to match: targets in the kube-system namespace carrying the label k8s-app=kubelet will be picked up by our prometheus
    matchNames:
    - kube-system
  selector:             # these three lines match our Service
    matchLabels:
      k8s-app: kubelet

After making these changes, we can create the configuration files directly:

      [root@HUOBAN-K8S-MASTER01 manifests]# kubectl apply -f ./
      namespace/monitoring unchanged
      customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com unchanged
      customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com unchanged
      customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com unchanged
      customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com unchanged
      customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com unchanged
      clusterrole.rbac.authorization.k8s.io/prometheus-operator unchanged
      clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator unchanged
      deployment.apps/prometheus-operator unchanged
      service/prometheus-operator unchanged
      serviceaccount/prometheus-operator unchanged
      servicemonitor.monitoring.coreos.com/prometheus-operator created
      alertmanager.monitoring.coreos.com/main created
      secret/alertmanager-main unchanged
      service/alertmanager-main unchanged
      serviceaccount/alertmanager-main unchanged
      servicemonitor.monitoring.coreos.com/alertmanager created
      secret/grafana-datasources unchanged
      configmap/grafana-dashboard-apiserver unchanged
      configmap/grafana-dashboard-controller-manager unchanged
      configmap/grafana-dashboard-k8s-resources-cluster unchanged
      configmap/grafana-dashboard-k8s-resources-namespace unchanged
      configmap/grafana-dashboard-k8s-resources-node unchanged
      configmap/grafana-dashboard-k8s-resources-pod unchanged
      configmap/grafana-dashboard-k8s-resources-workload unchanged
      configmap/grafana-dashboard-k8s-resources-workloads-namespace unchanged
      configmap/grafana-dashboard-kubelet unchanged
      configmap/grafana-dashboard-node-cluster-rsrc-use unchanged
      configmap/grafana-dashboard-node-rsrc-use unchanged
      configmap/grafana-dashboard-nodes unchanged
      configmap/grafana-dashboard-persistentvolumesusage unchanged
      configmap/grafana-dashboard-pods unchanged
      configmap/grafana-dashboard-prometheus-remote-write unchanged
      configmap/grafana-dashboard-prometheus unchanged
      configmap/grafana-dashboard-proxy unchanged
      configmap/grafana-dashboard-scheduler unchanged
      configmap/grafana-dashboard-statefulset unchanged
      configmap/grafana-dashboards unchanged
      deployment.apps/grafana configured
      service/grafana unchanged
      serviceaccount/grafana unchanged
      servicemonitor.monitoring.coreos.com/grafana created
      clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
      clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
      deployment.apps/kube-state-metrics unchanged
      role.rbac.authorization.k8s.io/kube-state-metrics unchanged
      rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
      service/kube-state-metrics unchanged
      serviceaccount/kube-state-metrics unchanged
      servicemonitor.monitoring.coreos.com/kube-state-metrics created
      clusterrole.rbac.authorization.k8s.io/node-exporter unchanged
      clusterrolebinding.rbac.authorization.k8s.io/node-exporter unchanged
      daemonset.apps/node-exporter configured
      service/node-exporter unchanged
      serviceaccount/node-exporter unchanged
      servicemonitor.monitoring.coreos.com/node-exporter created
      apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
      clusterrole.rbac.authorization.k8s.io/prometheus-adapter unchanged
      clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
      clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter unchanged
      clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator unchanged
      clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources unchanged
      configmap/adapter-config unchanged
      deployment.apps/prometheus-adapter configured
      rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader unchanged
      service/prometheus-adapter unchanged
      serviceaccount/prometheus-adapter unchanged
      clusterrole.rbac.authorization.k8s.io/prometheus-k8s unchanged
      clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
      prometheus.monitoring.coreos.com/k8s created
      rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
      rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
      rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
      rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
      role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
      role.rbac.authorization.k8s.io/prometheus-k8s unchanged
      role.rbac.authorization.k8s.io/prometheus-k8s unchanged
      role.rbac.authorization.k8s.io/prometheus-k8s unchanged
      prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
      service/prometheus-k8s unchanged
      serviceaccount/prometheus-k8s unchanged
      servicemonitor.monitoring.coreos.com/prometheus created
      servicemonitor.monitoring.coreos.com/kube-apiserver created
      servicemonitor.monitoring.coreos.com/coredns created
      servicemonitor.monitoring.coreos.com/kube-controller-manager created
      servicemonitor.monitoring.coreos.com/kube-scheduler created
      servicemonitor.monitoring.coreos.com/kubelet created

After the deployment succeeds, we can take a look at the CRDs; the yaml files create them for us automatically. Our ServiceMonitors only take effect once the CRDs have been created.

      [root@HUOBAN-K8S-MASTER01 manifests]# kubectl get crd
      NAME CREATED AT
      alertmanagers.monitoring.coreos.com 2019-10-18T08:32:57Z
      podmonitors.monitoring.coreos.com 2019-10-18T08:32:58Z
      prometheuses.monitoring.coreos.com 2019-10-18T08:32:58Z
      prometheusrules.monitoring.coreos.com 2019-10-18T08:32:58Z
      servicemonitors.monitoring.coreos.com 2019-10-18T08:32:59Z

The other resource files are all deployed under a single namespace; in monitoring we find the list of Pods managed by the operator:

      [root@HUOBAN-K8S-MASTER01 manifests]# kubectl get pod -n monitoring
      NAME READY STATUS RESTARTS AGE
      alertmanager-main-0 2/2 Running 0 11m
      alertmanager-main-1 2/2 Running 0 11m
      alertmanager-main-2 2/2 Running 0 11m
      grafana-55488b566f-g2sm9 1/1 Running 0 11m
      kube-state-metrics-ff5cb7949-wq7pb 3/3 Running 0 11m
      node-exporter-6wb5v 2/2 Running 0 11m
      node-exporter-785rf 2/2 Running 0 11m
      node-exporter-7kvkp 2/2 Running 0 11m
      node-exporter-85bnh 2/2 Running 0 11m
      node-exporter-9vxwf 2/2 Running 0 11m
      node-exporter-bvf4r 2/2 Running 0 11m
      node-exporter-j6d2d 2/2 Running 0 11m
      prometheus-adapter-668748ddbd-d8k7f 1/1 Running 0 11m
      prometheus-k8s-0 3/3 Running 1 11m
      prometheus-k8s-1 3/3 Running 1 11m
      prometheus-operator-55b978b89-qpzfk 1/1 Running 0 11m

prometheus and alertmanager are created as StatefulSets, while the other Pods are created via Deployments:

      [root@HUOBAN-K8S-MASTER01 manifests]# kubectl get deployments.apps -n monitoring
      NAME READY UP-TO-DATE AVAILABLE AGE
      grafana 1/1 1 1 12m
      kube-state-metrics 1/1 1 1 12m
      prometheus-adapter 1/1 1 1 12m
      prometheus-operator 1/1 1 1 12m
      [root@HUOBAN-K8S-MASTER01 manifests]# kubectl get statefulsets.apps -n monitoring
      NAME READY AGE
      alertmanager-main 3/3 11m
      prometheus-k8s 2/2 11m

# prometheus-operator is the core component here; it watches and manages our prometheus and alertmanager objects

After creation we still cannot access prometheus directly:

[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get svc -n monitoring |egrep "prometheus|grafana|alertmanage"
alertmanager-main ClusterIP 10.96.226.38 <none> 9093/TCP 3m55s
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 3m10s
grafana ClusterIP 10.97.175.234 <none> 3000/TCP 3m53s
prometheus-adapter ClusterIP 10.96.43.155 <none> 443/TCP 3m53s
prometheus-k8s ClusterIP 10.105.75.186 <none> 9090/TCP 3m52s
prometheus-operated ClusterIP None <none> 9090/TCP 3m
prometheus-operator ClusterIP None <none> 8080/TCP 3m55s

Since the svc in the default yaml files use ClusterIP, we cannot access them from outside. We can either proxy them with an ingress or use a NodePort for temporary access. Here I will modify the svc to use a NodePort:
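If you prefer the ingress route mentioned above, a sketch might look like the following. This assumes an ingress controller is already deployed in the cluster; the host name is a placeholder, and the `networking.k8s.io/v1beta1` API matches the Kubernetes versions discussed here:

```yaml
# Hypothetical Ingress proxying the grafana svc in the monitoring
# namespace; requires an ingress controller and DNS for the host.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  rules:
  - host: grafana.example.com      # placeholder host name
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana     # the ClusterIP svc created above
          servicePort: 3000
```

With an ingress in place the svc can stay ClusterIP; below I take the simpler NodePort route instead.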

# I modify these with kubectl edit here; alternatively, modify the yaml files and apply them

kubectl edit svc -n monitoring prometheus-k8s
# note: the svc to modify is prometheus-k8s, the one that has a clusterIP
kubectl edit svc -n monitoring grafana
kubectl edit svc -n monitoring alertmanager-main
# three services need to be modified; do not modify the wrong ones. Change the ones that have a clusterIP
...
  type: NodePort   # change this line to NodePort

For prometheus-k8s, grafana, and alertmanager-main, we only change the type: ClusterIP line.

![image.png](https://upload-images.jianshu.io/upload_images/6064401-efe1ddb97a7d8c65.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

After the changes, if we look at the svc again, we see that these services now include node ports, and we can access them from any cluster node:

[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get svc -n monitoring |egrep "prometheus|grafana|alertmanage"
alertmanager-main NodePort 10.96.226.38 <none> 9093:32477/TCP 13m
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 12m
grafana NodePort 10.97.175.234 <none> 3000:32474/TCP 13m
prometheus-adapter ClusterIP 10.96.43.155 <none> 443/TCP 13m
prometheus-k8s NodePort 10.105.75.186 <none> 9090:32489/TCP 13m
prometheus-operated ClusterIP None <none> 9090/TCP 12m
prometheus-operator ClusterIP None <none> 8080/TCP 13m

Next, let's look at the prometheus UI:

[root@HUOBAN-K8S-MASTER01 manifests]# kubectl get svc -n monitoring |grep prometheus-k8s
prometheus-k8s NodePort 10.105.75.186 <none> 9090:32489/TCP 19m
[root@HUOBAN-K8S-MASTER01 manifests]# hostname -i
172.16.17.191

We access the cluster at 172.16.17.191:32489.

![image.png](https://upload-images.jianshu.io/upload_images/6064401-90275be27c1d9f95.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

Here kube-controller-manager and kube-scheduler have no monitored targets, while all the others do. This comes down to how they are defined in the official yaml files.
![image.png](https://upload-images.jianshu.io/upload_images/6064401-80b0213e740d6398.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

Configuration file explained

vim prometheus-serviceMonitorKubeScheduler.yaml

apiVersion: monitoring.coreos.com/v1   # appears in `kubectl get crd`; no modification needed
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: kube-scheduler
  name: kube-scheduler    # the name we define
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    port: http-metrics    # the port name defined on the svc
  jobLabel: k8s-app
  namespaceSelector:      # which namespaces to match; configuring any: true instead would query all namespaces
    matchNames:
    - kube-system
  selector:               # roughly: match the svc in the kube-system namespace carrying the label k8s-app=kube-scheduler
    matchLabels:
      k8s-app: kube-scheduler
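The missing kube-scheduler targets seen earlier can be fixed by creating a Service that this ServiceMonitor's selector matches, since no such svc exists in kube-system by default. A sketch, assuming the scheduler's static pod carries the component=kube-scheduler label and exposes its metrics on the default HTTP port 10251 (both are assumptions to verify against your cluster):

```yaml
# Hypothetical Service carrying the k8s-app=kube-scheduler label that
# the ServiceMonitor above selects.
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler      # matched by the ServiceMonitor's selector
spec:
  selector:
    component: kube-scheduler    # assumes the scheduler pod carries this label
  ports:
  - name: http-metrics           # must equal the port name in the ServiceMonitor
    port: 10251
    targetPort: 10251
```

An analogous Service would be needed for kube-controller-manager. Note the port name here must match the `port: http-metrics` entry in the ServiceMonitor's endpoints, because ServiceMonitors reference ports by name, not number.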


Origin blog.51cto.com/79076431/2480861