Monitoring and horizontal pod autoscaling (HPA)

Monitoring: the legacy approach required installing the Heapster components

 

Resource metrics: metrics-server

Custom metrics: prometheus + k8s-prometheus-adapter

Custom Resource Definition

Aggregated (extension) API server

Resource metrics API

 

Next-generation architecture:

Core metrics pipeline: composed of the kubelet, metrics-server, and the APIs exposed by the API server; provides cumulative CPU usage, real-time memory usage, pod resource utilization, and container disk usage.

 

Monitoring pipeline: collects various metrics from the system and exposes them to end users, storage systems, and the HPA. It comprises core metrics and many non-core metrics; the non-core metrics cannot be parsed by Kubernetes itself.

 

metrics-server registers with the API server under:

/apis/metrics.k8s.io/v1beta1

 

Deploy metrics-server to obtain core metrics

https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B

 

git clone https://github.com/kubernetes-incubator/metrics-server.git

cd /root/metrics/metrics-server-master/deploy/1.8+

Edit metrics-server-deployment.yaml and change the image address to mirrorgooglecontainers/metrics-server-amd64:v0.3.3:

vim metrics-server-deployment.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.3   # changed to the mirror address
        imagePullPolicy: Always
        command:                                                    # added
        - /metrics-server                                           # added
        - --metric-resolution=30s                                   # added
        - --kubelet-insecure-tls                                    # added
        - --kubelet-preferred-address-types=InternalIP              # added
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

 

vim resource-reader.yaml

rules:

- apiGroups:

  - ""

  resources:

  - pods

  - nodes

  - nodes/stats

  - namespaces

  verbs:

  - get

  - list

  - watch

 

kubectl apply -f ./

 

kubectl get pods -n kube-system  -o wide

kubectl describe pods -n kube-system metrics-server-95cc6867b-nm8g4

kubectl api-versions    # metrics.k8s.io/v1beta1 should now appear

Usage

Start a reverse proxy:

kubectl proxy --port=8080

In another terminal, fetch the data:

curl http://localhost:8080/apis/metrics.k8s.io/v1beta1

Get nodes:

curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes

Get pods:

curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods

Or use kubectl directly:

kubectl top nodes node1

kubectl top pod --all-namespaces
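The nodes endpoint above returns a NodeMetricsList. A minimal sketch of extracting per-node usage from such a payload (the sample values are made up for illustration):

```python
import json

# Illustrative payload in the shape of a NodeMetricsList, as returned by
# /apis/metrics.k8s.io/v1beta1/nodes (the values here are invented).
sample = json.loads("""
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {"metadata": {"name": "node1"},
     "usage": {"cpu": "156m", "memory": "1204Mi"}}
  ]
}
""")

# Print per-node CPU and memory usage, roughly what `kubectl top nodes` shows.
for item in sample["items"]:
    name = item["metadata"]["name"]
    usage = item["usage"]
    print(f'{name}: cpu={usage["cpu"]} memory={usage["memory"]}')
# → node1: cpu=156m memory=1204Mi
```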

 

The second deployment method

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

In metrics-server-deployment.yaml, change the image addresses to mirrorgooglecontainers/metrics-server-amd64:v0.3.3 and

mirrorgooglecontainers/addon-resizer:1.8.5

 

Prometheus monitoring

Container logs: /var/log/containers

 

Prometheus itself is essentially a time-series database.

node_exporter --> prometheus <--> PromQL <-- kube-state-metrics <-- custom metrics API (k8s-prometheus-adapter)

 

Deployment

Official manifests: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus

Manifests used here: https://github.com/ikubernetes/k8s-prom

 

Procedure

mkdir prometheus && cd prometheus/

unzip k8s-prom-master.zip  && cd k8s-prom-master/

 

Create a namespace

kubectl apply -f namespace.yaml

 

Create the node_exporter client agent:

cd node_exporter/

kubectl apply -f ./

 

Create the Prometheus server itself:

cd prometheus/

kubectl apply -f ./

Test access: http://192.168.81.10:30090/graph

 

Create kube-state-metrics, which converts metrics into a format Kubernetes can recognize:

cd kube-state-metrics/

Change the image address: vim kube-state-metrics-deploy.yaml

mirrorgooglecontainers/kube-state-metrics-amd64:v1.3.1

kubectl apply -f ./

 

Create k8s-prometheus-adapter, the custom-metrics API server:

cd k8s-prometheus-adapter/

(umask 077; openssl genrsa -out serving.key 2048)    # private key

openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"    # certificate signing request

openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 365    # signed certificate

kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt --from-file=serving.key -n prom    # create the secret

 

rm -f custom-metrics-apiserver-deployment.yaml    # replace with the upstream deployment version

wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml

vim custom-metrics-apiserver-deployment.yaml    # change the namespace to prom

 

Download the configmap:

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/prometheus/prometheus-configmap.yaml

vim prometheus-configmap.yaml    # change the namespace to prom

metadata:

  name: adapter-config

 

Image mirror source: https://hub.docker.com/r/directxman12/k8s-prometheus-adapter-amd64/tags

 

kubectl apply -f ./

 

An alternative way to deploy k8s-prometheus-adapter:

Download https://github.com/DirectXMan12/k8s-prometheus-adapter/tree/master/deploy/manifests to replace the k8s-prometheus-adapter directory, change the namespace from custom-metrics to prom,

then apply all files in that directory.

kubectl exec -it   -n prom custom-metrics-apiserver-fd48dd8c-khrkn -- /bin/sh

kubectl api-versions    # custom.metrics.k8s.io/v1beta1 should now appear

kubectl proxy --port=8080

In another terminal:

curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/

 

Integrate Grafana

cp grafana.yaml ../prometheus/k8s-prom-master

vim grafana.yaml

Change the namespace to prom (in both the Deployment and the Service), set nodePort: 32002, and comment out the InfluxDB environment variables:

#- name: INFLUXDB_HOST

#  value: monitoring-influxdb

 

kubectl apply -f grafana.yaml

Configure the Grafana data source:

Name:Prometheus

Type:Prometheus

URL http://prometheus.prom.svc:9090  

Then Save & Test.

 

Template download

https://grafana.com/grafana/dashboards/8588    # download the JSON

On the Home dashboard: + -> Import -> upload the JSON -> select a data source.

The Kubernetes Cluster (Prometheus) template is recommended.

 

HPA: automatic horizontal pod scaling

kubectl explain hpa

kubectl explain hpa.spec

kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80    # --expose creates a Service on port 80

Automatic scaling:

kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60    # 1 to 8 pods, targeting 60% CPU utilization

kubectl get hpa                                         
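The HPA controller's documented scaling rule is desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to [min, max]. A sketch of that rule (the function name is illustrative):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 8) -> int:
    """Sketch of the HPA scaling rule: scale proportionally to the ratio
    of observed metric to target, then clamp to [min, max]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# e.g. 2 pods averaging 90% CPU against a 60% target -> 3 pods
print(desired_replicas(2, 90, 60))
# → 3
```

So under load-testing with ab, CPU climbs past the 60% target and replicas grow toward --max; when load stops, the ratio drops below 1 and replicas shrink toward --min.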

 

Pressure test:

yum install httpd-tools -y

ab -c 100 -n 50000 http://10.96.150.45/index.html

kubectl describe hpa

kubectl delete deployments.apps myapp

kubectl delete pod pod-demo --force --grace-period=0    # forcibly remove the pod

 

Using HPA v2

kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80

 

vim hpa-v2-demo.yaml

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2-2
spec:
  scaleTargetRef:                      # the object this HPA scales
    apiVersion: apps/v1
    kind: Deployment                   # scale a Deployment
    name: myapp                        # the Deployment created above
  minReplicas: 1                       # minimum replicas
  maxReplicas: 10                      # maximum replicas
  metrics:                             # metrics the scaling decision is based on
  - type: Resource                     # resource-based metric
    resource:
      name: cpu                        # based on CPU
      targetAverageUtilization: 55     # scale out above 55% average utilization
  - type: Resource
    resource:
      name: memory                     # based on memory
      targetAverageValue: 50Mi         # scale out above 50Mi average usage

 

kubectl apply -f  hpa-v2-demo.yaml

kubectl get hpa

Pressure test:

ab -c 100 -n 50000 http://10.96.150.45/index.html

 

HPA v2 custom-metrics example

kubectl run myapp-custom --image=ikubernetes/metrics-app --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80

 

vim hpa-v2-custom.yaml

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-custom
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods                         # metric exposed by the pods themselves
    pods:
      metricName: http_requests        # custom metric: http_requests
      targetAverageValue: 800m         # scale out when the average exceeds 800m
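Values such as 800m and 50Mi use Kubernetes quantity suffixes (m = 1/1000, Mi = 2^20). A simplified parser covering only the suffixes used in these examples (the full quantity grammar also allows scientific notation and decimal suffixes like k, M, G):

```python
# Simplified Kubernetes quantity parser; real quantities also allow
# scientific notation and decimal suffixes (k, M, G, ...).
SUFFIXES = {
    "m": 1e-3,        # milli: 800m == 0.8
    "Ki": 2**10,
    "Mi": 2**20,      # 50Mi == 50 * 1048576 bytes
    "Gi": 2**30,
}

def parse_quantity(q: str) -> float:
    # Check longer suffixes first so "Mi" is not mistaken for plain "m".
    for suffix, factor in sorted(SUFFIXES.items(), key=lambda s: -len(s[0])):
        if q.endswith(suffix):
            return float(q[:-len(suffix)]) * factor
    return float(q)

print(parse_quantity("800m"))   # → 0.8
print(parse_quantity("50Mi"))   # → 52428800.0
```

So targetAverageValue: 800m means an average of 0.8 http_requests per pod triggers scale-out.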

 

Pressure test:

ab -c 100 -n 50000 http://10.102.237.99/index.html


Origin: www.cnblogs.com/leiwenbin627/p/11361518.html