Detailed pod health check (liveness, readiness, rolling update)

Environment introduction

Host      IP address       Services
master    192.168.1.21     k8s + httpd + nginx
node01    192.168.1.22     k8s
node02    192.168.1.23     k8s

This experiment continues from the one in https://blog.51cto.com/14320361/2464655.

I. Pod liveness and readiness probes

The kubelet uses liveness probes to know when to restart a container. For example, a liveness probe can catch a deadlock where the application is running but unable to make progress; restarting a container in that state makes the application available again even though the bug is still there.

The kubelet uses readiness probes to know when a container is ready to accept traffic. A Pod is considered ready only when all of its containers are ready. One use of this signal is to control which Pods serve as backends for a Service: while a Pod is not ready, it is removed from the Service's load balancer.

Probes support the following three check methods:

<1> exec (run a command)

Runs a command inside the container. If the command exits with status code 0, the application is considered up and running; any other exit code means the application is not working properly.

  livenessProbe:
    exec:
      command:
      - cat
      - /home/laizy/test/hostpath/healthy

<2> tcpSocket

Tries to open a TCP socket connection to the container (that is, IP address:port). If the connection can be established, the application is considered up and running; otherwise it is considered not to be working properly.

  livenessProbe:
    tcpSocket:
      port: 8080
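
For context, a complete minimal Pod built around a tcpSocket liveness probe might look like the sketch below; the nginx image, Pod name and port 80 are illustrative assumptions rather than part of the original experiment.

kind: Pod
apiVersion: v1
metadata:
  name: tcp-liveness          # hypothetical Pod name
spec:
  containers:
  - name: web
    image: nginx              # assumed image; any TCP server works
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80              # the kubelet tries to open a TCP connection here
      initialDelaySeconds: 10
      periodSeconds: 5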

<3> httpGet

Sends an HTTP GET request to the container's web server. If the returned HTTP status code is between 200 and 399, the application is considered up and running; otherwise it is considered not to be working properly. Each HTTP health check requests the specified URL again.

  httpGet:              # HTTP GET health check; a response code of 200-399 means the container is healthy
    path: /             # URI to request
    port: 80            # port number
    #host: 127.0.0.1    # host address (defaults to the Pod IP)
    scheme: HTTP        # protocol, HTTP or HTTPS
    httpHeaders: []     # custom request headers
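
httpHeaders takes a list of name/value pairs. As a hedged sketch, a probe sending one custom header could look like this (the header name and value are made up for illustration):

  livenessProbe:
    httpGet:
      path: /
      port: 80
      httpHeaders:
      - name: X-Probe         # assumed custom header name
        value: kubelet        # assumed value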

Parameter Description

initialDelaySeconds: how many seconds to wait after the container has started before the first probe is executed.

periodSeconds: how often to perform the probe. The default is 10 seconds; the minimum is 1 second.

timeoutSeconds: probe timeout. The default is 1 second; the minimum is 1 second.

successThreshold: the minimum number of consecutive successes needed after a failure for the probe to be considered successful again. The default is 1; for liveness it must be 1; the minimum is 1.
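
A sketch combining the timing parameters above in a single probe (the file path /tmp/ready is a hypothetical example):

  readinessProbe:
    exec:
      command:
      - cat
      - /tmp/ready            # hypothetical file to check
    initialDelaySeconds: 10   # wait 10 seconds after the container starts
    periodSeconds: 5          # probe every 5 seconds
    timeoutSeconds: 1         # each probe must finish within 1 second
    successThreshold: 1       # one success marks the probe successful again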

A probe has one of three results:

Success: the container passed the check.
Failure: the container failed the check.
Unknown: the check itself could not be executed, so no action is taken.

1. LivenessProbe (liveness probing)

(1) Write a liveness YAML file

[root@master ~]# vim liveness.yaml
kind: Pod
apiVersion: v1
metadata:
  name: liveness
  labels:
    test: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test; sleep 60; rm -rf /tmp/test; sleep 300
    livenessProbe:              # liveness probe
      exec:                     # check the service by running a command
        command:                # the command and its arguments
        - cat
        - /tmp/test
      initialDelaySeconds: 10   # start probing 10 seconds after the Pod starts
      periodSeconds: 5          # probe every 5 seconds

This Pod configuration defines a single container. periodSeconds specifies that the kubelet performs a liveness probe every 5 seconds, and initialDelaySeconds tells the kubelet to wait 10 seconds before the first probe. The probe runs cat /tmp/test inside the container: if the command succeeds it returns 0 and the kubelet considers the container alive and healthy; if it returns a non-zero value, the kubelet kills the container and restarts it.

(2) Run it

[root@master ~]# kubectl apply -f liveness.yaml 

(3) Watch the Pods

[root@master ~]# kubectl get pod -w


The liveness probe decides whether the service is running normally based on whether the file exists: while /tmp/test exists the probe succeeds, and once the file is removed the probe fails and the Pod is restarted according to the restart policy you set.
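
To watch the failure as it happens, commands along these lines should help (output omitted; the Events section of describe should record the failed probes, and the RESTARTS counter grows after each restart):

[root@master ~]# kubectl describe pod liveness
[root@master ~]# kubectl get pod liveness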

2. ReadinessProbe (readiness probing)

The usage scenario of ReadinessProbe differs slightly from livenessProbe. Sometimes an application is temporarily unable to accept requests, for example when the Pod is already Running but the application inside the container has not finished starting up. Without a ReadinessProbe, Kubernetes would assume the Pod can handle requests even though the program has not started successfully; since we do not want Kubernetes to dispatch requests to it yet, that is exactly where the ReadinessProbe comes in.
ReadinessProbe and livenessProbe can use the same detection methods, but they treat the Pod differently: a ReadinessProbe failure removes the Pod's IP:Port from the corresponding EndPoint list, whereas a livenessProbe failure kills the container, and the Pod's restart policy decides what happens next.
A ReadinessProbe checks whether the container is ready; if it is not ready, Kubernetes does not forward traffic to the Pod.
Like livenessProbe, ReadinessProbe supports the exec, httpGet and TCP detection methods with the same configuration; the only difference is that the livenessProbe field is replaced by readinessProbe.
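
When the Pod sits behind a Service, the removal can be observed on the endpoints list; for example (web-svc is the Service name used later in this article):

[root@master ~]# kubectl get endpoints web-svc    # the Pod's IP:Port disappears while it is not ready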

(1) Write a readiness YAML file

[root@master ~]# vim readiness.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: readiness
  labels:
    test: readiness
spec:
  restartPolicy: Never
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/test; sleep 60; rm -rf /tmp/test; sleep 300
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/test
      initialDelaySeconds: 10
      periodSeconds: 5

(2) Run it

[root@master ~]# kubectl apply -f readiness.yaml 

(3) Watch the Pods

[root@master ~]# kubectl get pod -w


3. Summary of liveness and readiness probing

(1) liveness and readiness are two health check mechanisms. If they are not explicitly configured, k8s adopts the same default behavior for both: it judges whether the probe succeeded by whether the process started in the container returns a zero value.

(2) The two detection methods are configured in exactly the same way; they differ in what happens when a probe fails.

A liveness failure acts on the container according to the restart policy; in most cases the container is restarted.

A readiness failure marks the container as unavailable, and the Service no longer forwards requests to it.

(3) The two detection methods can be used independently or together. Use liveness to decide whether to restart the container, achieving self-healing; use readiness to decide whether the container is ready to provide service.
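
As a sketch of the two probes coexisting in one container (the image, command and file path are illustrative):

  containers:
  - name: app
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 3000
    livenessProbe:            # on failure: the container is restarted (self-healing)
      exec:
        command:
        - cat
        - /tmp/healthy
    readinessProbe:           # on failure: the Pod is removed from the Service endpoints
      exec:
        command:
        - cat
        - /tmp/healthy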

II. Applications of probing

1. Probes during scaling (scale up / scale down)

(1) Write a Deployment YAML file with a readiness probe

[root@master ~]# vim hcscal.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web
spec:
  replicas: 3
  template: 
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: web
        image: httpd
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            scheme: HTTP    # probe protocol
            path: /healthy  # path to request
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5

---
kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    run: web
  ports:
  - protocol: TCP
    port: 90
    targetPort: 80
    nodePort: 30321

In this configuration file, Pods are created from the httpd image. The periodSeconds field specifies that the kubelet performs a check every 5 seconds, and the initialDelaySeconds field tells the kubelet to wait 10 seconds first. The probe sends an HTTP GET request to /healthy on port 80 of the container; a status code greater than or equal to 200 and less than 400 indicates success, and any other code indicates failure.

(2) The httpGet detection method has the following optional control fields:

host: the host name to connect to; defaults to the Pod IP. You may instead set a Host header in httpHeaders.
scheme: the protocol used to connect to the host; defaults to HTTP.
path: the path of the HTTP request on the server.
httpHeaders: custom HTTP request headers; HTTP allows repeated headers.
port: the port number or port name to access on the container.
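
As a small sketch of the port field taking a name instead of a number (the port name http-web is a hypothetical label):

      containers:
      - name: web
        image: httpd
        ports:
        - name: http-web      # hypothetical port name
          containerPort: 80
        readinessProbe:
          httpGet:
            path: /healthy
            port: http-web    # refers to the containerPort by its name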

(3) Run it

[root@master ~]# kubectl apply -f hcscal.yaml 

(4) Watch the Pods

[root@master ~]# kubectl get pod -w


[root@master ~]# kubectl get pod -o wide


[root@master ~]# kubectl get service -o wide


(5) Access it

[root@master ~]# curl  10.244.1.21/healthy


(6) Create the file in the specified directory inside the Pod

[root@master ~]# kubectl exec web-69d659f974-7s9bc touch /usr/local/apache2/htdocs/healthy

(7) Watch the Pods

[root@master ~]# kubectl get pod -w

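Since this section is about scaling, it is worth seeing how the readiness probe gates newly created Pods; a sketch (the replica count 5 is arbitrary, and the new Pods should stay unready because they lack the /healthy file):

[root@master ~]# kubectl scale deployment web --replicas=5
[root@master ~]# kubectl get pod -w               # new Pods stay not ready until /healthy exists in them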

2. Probes during a rolling update

(1) Write a YAML file (app.v1.yaml)

[root@master ~]# vim app.v1.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

(2) Run it and record the version information

[root@master ~]# kubectl apply -f app.v1.yaml --record 
Check it:
[root@master ~]# kubectl rollout history deployment app 


(3) Watch the Pods

[root@master ~]# kubectl get pod -w


3. Upgrade the Deployment

(1) Write a YAML file (app.v2.yaml)

[root@master ~]# cp app.v1.yaml app.v2.yaml 
[root@master ~]# vim app.v2.yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000        # modified command: /tmp/healthy is no longer created
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

(2) Run it and record the version information

[root@master ~]# kubectl apply -f app.v2.yaml --record 
Check it:
[root@master ~]# kubectl rollout history deployment app 


(3) Watch the Pods

[root@master ~]# kubectl get pod -w

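Because the v2 Pods never create /tmp/healthy, their readiness probe keeps failing and the rollout stalls; this can be confirmed with commands like these (output omitted):

[root@master ~]# kubectl rollout status deployment app    # blocks waiting for the new Pods to become ready
[root@master ~]# kubectl get deployment app               # the ready count stays below the desired 10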

(4) Upgrade the Deployment once more

<1> Write a YAML file (app.v3.yaml)
[root@master ~]# cp app.v1.yaml app.v3.yaml 
[root@master ~]# vim app.v3.yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000        # modified command: the readiness probe section has been removed
<2> Run it and record the version information
[root@master ~]# kubectl apply -f app.v3.yaml --record 
Check it:
[root@master ~]# kubectl rollout history deployment app 


<3> Watch the Pods
[root@master ~]# kubectl get pod -w


4. Roll back to the v2 version

[root@master ~]# kubectl rollout undo deployment app --to-revision=2

Check it:

[root@master ~]# kubectl get pod


(1) Write a YAML file (add a rolling update strategy to app.v2.yaml)

[root@master ~]# vim app.v2.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
  replicas: 10
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 3000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Parameter description

minReadySeconds:

Kubernetes waits for the configured number of seconds before proceeding with the upgrade.
If this value is not set, Kubernetes assumes the container provides service as soon as it has started.
If this value is not set, in some extreme cases the service may end up running abnormally during the upgrade.

maxSurge:

The maximum number of Pods that may exist above the original desired count during the upgrade.
For example, maxSurge=1 with replicas=5 means Kubernetes starts 1 new Pod before deleting an old one; during the whole upgrade there are at most 5+1 Pods.

maxUnavailable:

The maximum number of Pods that may be unable to provide service during the upgrade.
When maxSurge is 0, this value cannot be 0 as well (the two cannot both be zero).
For example, maxUnavailable=1 means that at most 1 Pod is out of service at any point during the whole upgrade.
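
A quick worked example with the values used below: with replicas=10, maxSurge=2 and maxUnavailable=2, the rollout runs at most 10+2=12 Pods in total while at least 10-2=8 must remain available, so Kubernetes replaces Pods roughly two at a time.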

(2) Run it and record the version information

[root@master ~]# kubectl apply -f app.v2.yaml --record 
Check it:
[root@master ~]# kubectl rollout history deployment app 


(3) Watch the Pods

[root@master ~]# kubectl get pod -w


III. Mini experiment

1) Write a Deployment resource object with 2 replicas using the nginx image. Use a readiness probe to check whether the custom file /test exists; begin probing 10 seconds after the container starts, with a 10-second interval.

(1) Write a readiness YAML file
[root@master yaml]# vim nginx.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: readiness
        image: 192.168.1.21:5000/nginx:v1
        readinessProbe:
          exec:
            command:
            - cat
            - /usr/share/nginx/html/test
          initialDelaySeconds: 10
          periodSeconds: 10
(2) Run it and record the version information
[root@master ~]# kubectl apply -f nginx.yaml --record 
Check it:
[root@master ~]# kubectl rollout history deployment web 


(3) Watch the Pods
[root@master ~]# kubectl get pod -w


2) Enter one of the two running Pods and create the file /test.

[root@master yaml]# kubectl exec -it web-864c7cf7fc-gpxq4  /bin/bash
root@web-864c7cf7fc-gpxq4:/# touch /usr/share/nginx/html/test

Check it:

[root@master yaml]# kubectl get pod -w


3) Create a Service resource object and associate it with the Deployment above. After running it, view the Service's details and confirm the backend Pods behind the EndPoint load balancing.

(1) Write the Service YAML file

[root@master yaml]# vim nginx-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    run: web
  ports:
  - protocol: TCP
    port: 90
    targetPort: 80
    nodePort: 30321

(2) Apply it

[root@master yaml]# kubectl apply -f nginx-svc.yaml 
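
To confirm the backend Pods behind the Service's EndPoint, commands along these lines work (output omitted; the Endpoints line should list the ready Pods' IP:Port pairs):

[root@master yaml]# kubectl describe service web-svc
[root@master yaml]# kubectl get endpoints web-svc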

(3) Change the page in each of the two Pods

Check the Pods:
[root@master yaml]# kubectl get pod -o wide
Change the pages:
[root@master yaml]# kubectl exec -it  web-864c7cf7fc-gpxq4  /bin/bash
root@web-864c7cf7fc-gpxq4:/# echo "123">/usr/share/nginx/html/test
root@web-864c7cf7fc-gpxq4:/# exit

[root@master yaml]# kubectl exec -it  web-864c7cf7fc-pcrs9   /bin/bash
root@web-864c7cf7fc-pcrs9:/# echo "321">/usr/share/nginx/html/test
root@web-864c7cf7fc-pcrs9:/# exit

4) After observing the status, write the /test file in the other Pod as well, then check the load balancing across the EndPoints behind the Service.

(1) Check the Service

[root@master yaml]# kubectl get service


(2) Access it

[root@master ~]# curl 192.168.1.21:30321/test


5) Re-run the Deployment resource using the httpGet detection method, then summarize and compare the two readiness detection methods.

(1) Modify the Deployment YAML file

[root@master yaml]# vim nginx.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: readiness
        image: 192.168.1.21:5000/nginx:v1
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /test        # URL path; nginx serves /usr/share/nginx/html/test at /test
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10

(2) Apply it

[root@master yaml]# kubectl apply -f nginx.yaml 

(3) Watch the Pods

[root@master yaml]# kubectl get pod -w


maxSurge: this parameter controls how far the total number of replicas may exceed the desired count during a rolling update. It can be an integer or a percentage; the default is 1, which is why there are 3 Pods at this point.

(4) Access it

[root@master yaml]# curl 192.168.1.21:30321/test


6) Summarize the similarities and differences between liveness and readiness probing, and their usage scenarios.

<1> The core difference between readiness and liveness

Readiness and liveness mean just what the words say: readiness asks whether the Pod can be accessed, liveness asks whether it is alive. If a readiness check fails, the consequence is that the Pod's IP is removed from the endpoints of every Service that selects it, meaning none of those Services forward requests to the Pod anymore. If a liveness check fails, the container is killed outright; and if your restart policy is Always, the Pod is then restarted.

<2> What counts as a readiness/liveness detection failure?

k8s provides three detection methods:

http get: a response code of 200-399 counts as success; anything else counts as failure.
tcp socket: the TCP port you specify can be opened (for example, you can telnet to it).
exec: a command is executed in the container, and an exit code of 0 counts as success.

Each method can be defined in either a readiness or a liveness probe. For example, an http get check defined in a readiness probe means: if a GET request to the given path returns an HTTP code outside 200-399, remove me from every Service that includes me. The same check defined in a liveness probe means: kill me.

<3> When to use readiness versus liveness

For example, if you want an http service restarted whenever accessing it fails, define a liveness probe with an http get check. If, on the other hand, you do not want it restarted when there is a problem, only taken out of the request rotation, then configure a readiness probe instead.

Note that liveness itself does not restart the Pod; whether the Pod restarts is controlled by its restartPolicy (restart strategy).
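
The restart policy is set per Pod; a minimal sketch of the field:

spec:
  restartPolicy: Always       # one of Always / OnFailure / Never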

Reference:
https://www.jianshu.com/p/16a375199cf2
