Using Prometheus to monitor Traefik, Redis, and the kubelet on every node of a Kubernetes cluster

1. Prometheus collects metric data over HTTP(S) interfaces, so there is no need to install a separate monitoring agent: a service only has to expose a /metrics endpoint, and Prometheus periodically pulls (scrapes) the data from it. For many common HTTP services we can reuse the service itself and simply add a /metrics endpoint for Prometheus.
2. Some services have no such endpoint built in; for those we can use an exporter to obtain the metric data, such as mysqld_exporter, node_exporter, or redis_exporter. An exporter is somewhat like the agent of a traditional monitoring system: it collects metric data from the target service and exposes it directly to Prometheus.
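For example, a /metrics endpoint simply returns plain-text metrics in the Prometheus exposition format. A minimal illustration (the service address, metric name, labels, and value below are made-up examples, not from any of the services that follow):

$ curl http://some-service/metrics
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027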

Monitoring Traefik through its built-in metrics endpoint
1. Edit Traefik's configuration file traefik.toml and add the following to enable the metrics endpoint:

[metrics]
  [metrics.prometheus]
    entryPoint = "traefik"
    buckets = [0.1, 0.3, 1.2, 5.0]
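Here entryPoint = "traefik" selects the entry point that serves the metrics, and buckets sets the histogram bucket boundaries, in seconds, for the request-duration histograms. A hedged sketch of the kind of series this produces (the names follow Traefik 1.x conventions and may differ in other versions; the counts are invented):

traefik_entrypoint_request_duration_seconds_bucket{code="200",entrypoint="traefik",le="0.1"} 42
traefik_entrypoint_request_duration_seconds_bucket{code="200",entrypoint="traefik",le="0.3"} 57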

2. Then recreate the Traefik ConfigMap and redeploy the Traefik Pod:
$ kubectl get configmap -n kube-system
traefik-conf   1   83d
$ kubectl delete configmap traefik-conf -n kube-system
$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
$ kubectl apply -f traefik.yaml
$ kubectl get svc -n kube-system | grep traefik
traefik-ingress-service   NodePort   10.100.222.78   <none>   80:31657/TCP,8080:31572/TCP
$ curl 10.100.222.78:8080/metrics
$ curl 192.168.1.243:31572/metrics
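If the endpoint is working, Traefik's own series should appear in the output; a quick check (the traefik_ prefix matches Traefik 1.x metric names):

$ curl -s 10.100.222.78:8080/metrics | grep '^traefik_' | head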
3. Update the Prometheus configuration file and add a job for Traefik:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-ops
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s

    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
        - targets: ['localhost:9090']

    - job_name: 'traefik'
      static_configs:
        - targets: ['traefik-ingress-service.kube-system.svc.cluster.local:8080']

$ kubectl apply -f prome-cm.yaml  # update the Prometheus ConfigMap
Because the Traefik Service is named traefik-ingress-service and lives in the kube-system namespace rather than alongside Prometheus, the target here must be written in FQDN form: traefik-ingress-service.kube-system.svc.cluster.local.
$ kubectl get svc -n kube-ops | grep prometheus
prometheus   NodePort   10.102.197.83   <none>   9090:32619/TCP
$ curl -X POST "http://192.168.1.243:32619/-/reload"  # make the new configuration take effect; the ConfigMap change may take some time to propagate. Reloading this way avoids having to recreate the Prometheus Pod.
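Note that in Prometheus 2.x the /-/reload endpoint only works if the server was started with the --web.enable-lifecycle flag. Before reloading you can also validate the file; a minimal sketch, assuming promtool is available in the Prometheus image and the config is mounted at /etc/prometheus/prometheus.yml (adjust the Pod name and path to your Deployment):

$ kubectl exec -n kube-ops <prometheus-pod> -- promtool check config /etc/prometheus/prometheus.yml
$ curl -s http://192.168.1.243:32619/api/v1/targets   # the targets API should now list the traefik job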

Using redis_exporter to monitor a Redis service
redis_exporter is deployed as a sidecar, i.e. in the same Pod as the Redis container.
1. Create the Deployment and Service:
$ docker pull redis:4
$ docker pull oliver006/redis_exporter:latest

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  namespace: kube-ops
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
      - name: redis-exporter
        image: oliver006/redis_exporter:latest
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9121
---
kind: Service
apiVersion: v1
metadata:
  name: redis
  namespace: kube-ops
spec:
  selector:
    app: redis
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  - name: prom
    port: 9121
    targetPort: 9121

$ kubectl get svc -n kube-ops | grep redis
redis   ClusterIP   10.105.241.59   <none>   6379/TCP,9121/TCP
$ curl 10.105.241.59:9121/metrics  # check that the exporter is serving metrics
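A few of the series redis_exporter typically exposes (the metric names come from oliver006/redis_exporter; the values below are illustrative):

redis_up 1
redis_connected_clients 1
redis_memory_used_bytes 812424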
2. Add a redis job and update the Prometheus ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-ops
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'traefik'
      static_configs:
      - targets: ['traefik-ingress-service.kube-system.svc.cluster.local:8080']
    - job_name: 'redis'
      static_configs:
      - targets: ['redis:9121']

Since the redis Service is in the same namespace as Prometheus, we can use the plain Service name as the target.
$ kubectl apply -f prometheus-cm.yaml  # update the configuration
$ kubectl get svc -n kube-ops | grep prometheus
prometheus   NodePort   10.102.197.83   <none>   9090:32619/TCP
$ curl -X POST "http://10.102.197.83:9090/-/reload"  # make the configuration take effect
Then open http://192.168.1.243:32619/targets in a browser to confirm the new target is up.
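Once the target is up, a quick sanity check is an instant query against the Prometheus HTTP API (the query assumes the redis_up gauge exported by redis_exporter):

$ curl -s 'http://10.102.197.83:9090/api/v1/query?query=redis_up'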

Using node-exporter to monitor every cluster node, and monitoring each node's kubelet
1. Deploy node-exporter
We deploy the service with a DaemonSet controller, so every node automatically runs one such Pod:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-ops
  labels:
    name: node-exporter
spec:
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /

Since we want to collect the host's monitoring metrics while node-exporter itself runs inside a container, we need to add a few Pod security settings: hostPID: true, hostIPC: true, and hostNetwork: true, so that the Pod uses the host's PID namespace, IPC namespace, and network. These Linux namespaces are the key isolation mechanism for containers; note that they are a completely different concept from Kubernetes cluster namespaces.
Because hostNetwork: true is specified, the Pod binds port 9100 on every node, and we can fetch the metrics directly through that port:
$ curl 127.0.0.1:9100/metrics
$ curl 127.0.0.1:10255/metrics  # the kubelet exposes monitoring data on port 10255
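Note that node_exporter v0.16.0 renamed many metrics to follow the Prometheus naming conventions (for example node_cpu became node_cpu_seconds_total), so dashboards written for older versions may need updating. A quick check of what the node exposes:

$ curl -s 127.0.0.1:9100/metrics | grep '^node_cpu_seconds_total' | head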
2. Add the new jobs and update the Prometheus configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-ops
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'traefik'
      static_configs:
      - targets: ['traefik-ingress-service.kube-system.svc.cluster.local:8080']
    - job_name: 'redis'
      static_configs:
      - targets: ['redis:9121']
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-kubelet'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:10255'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)

Through its integration with the Kubernetes API, Prometheus currently supports five main service discovery modes: Node, Service, Pod, Endpoints, and Ingress.
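Each mode is selected with the role field of a kubernetes_sd_configs entry; a minimal sketch (the job name is made up, everything else follows the jobs above):

- job_name: 'example'
  kubernetes_sd_configs:
  - role: node   # one of: node, service, pod, endpoints, ingress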
By default the kubelet listens on three ports: 10250, 10255, and 10248.
$ vim /var/lib/kubelet/config.yaml
healthzPort: 10248
port: 10250
readOnlyPort: 10255
When Prometheus uses the Node discovery mode, the default scrape port is 10250, and that port no longer serves /metrics data here; the kubelet now exposes its read-only data on port 10255 instead. So we have to rewrite the port, but should it be 10255? Not for this job: the kubernetes-nodes job above is meant to scrape node-exporter's metrics, and since we specified hostNetwork: true, node-exporter is bound to port 9100 on every node, so here we replace 10250 with 9100 (the kubernetes-kubelet job replaces it with 10255 instead).
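As a concrete illustration of the relabeling, using the example node from earlier: the node role discovers a target with __address__ = 192.168.1.243:10250; the regex (.*):10250 captures 192.168.1.243, and the replacement ${1}:9100 rewrites __address__ to 192.168.1.243:9100, where node-exporter listens. The labelmap rule then copies every Kubernetes node label onto the target.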
$ kubectl apply -f prometheus-cm.yaml
$ kubectl get svc -n kube-ops | grep prometheus
prometheus   NodePort   10.102.197.83   <none>   9090:32619/TCP
$ curl -X POST "http://10.102.197.83:9090/-/reload"  # make the configuration take effect

Origin: blog.51cto.com/dongdong/2432584