05 - Kubernetes Pod Advanced Usage and Controllers

Pod resources

spec.containers <[]object>
- name <string>
  image <string>
  imagePullPolicy <string>
    Always, Never, IfNotPresent

label

key=value

key: letters, numbers, and the characters -, _, and .
value: may be empty; if not, it must begin and end with a letter or number, and may contain -, _, and . in between

Check the labels on pods:

[root@master manifests]# kubectl get pods --show-labels
NAME                            READY   STATUS    RESTARTS   AGE    LABELS
myapp-84cd4b7f95-44qch          1/1     Running   0          7d5h   pod-template-hash=84cd4b7f95,run=myapp
myapp-84cd4b7f95-fzvsd          1/1     Running   0          7d5h   pod-template-hash=84cd4b7f95,run=myapp
myapp-84cd4b7f95-mlphg          1/1     Running   0          7d5h   pod-template-hash=84cd4b7f95,run=myapp
nginx-deploy-7689897d8d-lf8p7   1/1     Running   0          7d6h   pod-template-hash=7689897d8d,run=nginx-deploy
pod-demo                        2/2     Running   0          29s    app=myapp,tier=frontend

Use the -l option to filter by label:

[root@master manifests]# kubectl get pods -l app
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          3m21s

Adding labels to resources

Use kubectl label:

Usage:
  kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version] [options]

Label selectors

  • Equality-based operators: =, ==, !=
  • Set-based operators:

KEY in (VALUE1,VALUE2,...)
KEY notin (VALUE1,VALUE2,...)

[root@master manifests]# kubectl get pods -l release
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-7689897d8d-lf8p7   1/1     Running   0          7d6h
pod-demo                        2/2     Running   0          11m
[root@master manifests]# kubectl get pods -l release=canary
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-7689897d8d-lf8p7   1/1     Running   0          7d6h
[root@master manifests]# kubectl get pods -l release,app
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          12m
[root@master manifests]# kubectl get pods -l release=stable,app=myapp
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          12m

Many resources support embedded label selectors with two fields:

  1. matchLabels: directly specify key/value pairs
  2. matchExpressions: define the selector with expressions of the form {key: "KEY", operator: "OPERATOR", values: [VAL1, VAL2, ...]}

Operators:

In, NotIn: the values field must be a non-empty list;
Exists, DoesNotExist: the values field must be an empty list;
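As a sketch of the matchExpressions form (this fragment would sit inside a controller's spec, e.g. a ReplicaSet; the label keys are taken from the examples in this document):

```yaml
# Fragment of a controller spec using expression-based selection.
selector:
  matchExpressions:
  - key: app
    operator: In          # matches pods whose app label is one of the listed values
    values: ["myapp"]
  - key: release
    operator: Exists      # values must be empty/omitted for Exists and DoesNotExist
```

Pods matching all expressions are selected, just as with matchLabels.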

nodeSelector <map[string]string>

A node label selector; it influences the Pod scheduling decision.

Example:

[root@master manifests]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx-deploy-7689897d8d-lf8p7   1/1     Running   0          8d    10.244.3.2   node02.kubernetes   <none>           <none>
[root@master manifests]# kubectl delete -f pod-demo.yaml 
pod "pod-demo" deleted
[root@master manifests]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: 
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd
[root@master manifests]# kubectl create -f pod-demo.yaml 
pod/pod-demo created
[root@master manifests]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
pod-demo                        2/2     Running   0          4s    10.244.3.10   node01.kubernetes   <none>           <none>

nodeName

Specifies, by name, the node the Pod should be scheduled onto.

annotations

Unlike labels, annotations cannot be used to select resource objects; they only attach "metadata" to an object.
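As a small sketch, annotations are set under metadata.annotations (the annotation key below is hypothetical, chosen only to show the shape):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod
  labels:
    app: myapp                          # labels can be selected with -l app=myapp
  annotations:
    example.com/created-by: "ops-team"  # hypothetical key; annotations cannot be selected with -l
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
```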

Pod life cycle

A Pod's life cycle is divided into two stages:

1. Init containers run (initialization stage)
2. Main containers run (normal running stage)

Pod flow:

1. The Pod first enters the initialization stage, where init containers complete initialization; there can be several init containers, but they run serially.
2. After initialization, the main container enters its normal running stage.
3. Right after the main container starts, a command runs once: the 'post start' hook, which exits when it finishes.
4. After that, a liveness check of the main container runs: the 'liveness probe'.
5. At the same time a readiness check runs to determine whether the main container can serve traffic: the 'readiness probe'.
6. When the main container is about to exit, a command runs before termination: the 'pre stop' hook; then the Pod exits.
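The post start and pre stop hooks mentioned above are configured under the container's lifecycle field; a minimal sketch (the hook commands here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    lifecycle:
      postStart:            # runs right after the container starts
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/hook.log"]
      preStop:              # runs before the container is terminated
        exec:
          command: ["/bin/sh", "-c", "echo stopping >> /tmp/hook.log"]
```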

Probe behavior:

  1. liveness probe # detects whether the main container is still alive
  2. readiness probe # determines whether the main container's process is ready to serve external traffic

    Either probe supports three methods:

    1. Run a custom command
    2. Send a request to a specified TCP socket
    3. Send an HTTP request to a specified service
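The exec and httpGet methods are demonstrated below; the TCP socket method is not, so here is a sketch of it (the port assumes the myapp image serving on 80, as elsewhere in this document):

```yaml
# Fragment of a container spec: the probe succeeds if a TCP
# connection to the given port can be opened.
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 1
  periodSeconds: 3
```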

Pod phases:

Pending   : scheduling has not completed; there is no suitable node to create the Pod on
Running   : running
Failed    : failed
Succeeded : succeeded
Unknown   : the state cannot be determined

Creating a Pod

1. When a user creates a Pod, the request is submitted to the apiserver, which saves the target state in etcd.
2. The apiserver then asks the scheduler to schedule the Pod, and the scheduling result is saved into the Pod state previously stored in etcd.
3. Once the state in etcd is updated (say the Pod was scheduled onto node01), the kubelet on node01 learns of the state change through the apiserver.

restartPolicy:

Always, OnFailure, Never
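restartPolicy is set at the Pod spec level; a minimal sketch (the failing command is only there to trigger a restart):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure   # Always (default), OnFailure, or Never
  containers:
  - name: task
    image: busybox:latest
    command: ["/bin/sh", "-c", "exit 1"]   # non-zero exit triggers a restart under OnFailure
```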

Hands-on practice

There are three types of probes:

1. ExecAction
2. TCPSocketAction
3. HTTPGetAction

All three can be used with both livenessProbe and readinessProbe.

livenessProbe in practice

livenessProbe exec test

Create a Pod manifest file:

[root@master manifests]# cat liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 3600"] # create the liveness file, then remove it after 30s
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]     # probe with a command that checks the file exists
      initialDelaySeconds: 1    # seconds after container start before probing begins
      periodSeconds: 3          # seconds between probes
      successThreshold: 1       # consecutive successes to be considered healthy
      failureThreshold: 2       # consecutive failures before restarting
      timeoutSeconds: 2         # probe timeout, in seconds
[root@master manifests]# kubectl create -f liveness-exec.yaml 
pod/liveness-exec-pod created
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
liveness-exec-pod               1/1     Running   0          4s

After waiting a while:

[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
liveness-exec-pod               1/1     Running   1          2m20s
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
liveness-exec-pod               1/1     Running   2          2m40s

Check the details:

[root@master manifests]# kubectl describe pod liveness-exec-pod
Name:         liveness-exec-pod
Namespace:    default
Priority:     0
Node:         node03.kubernetes/10.0.20.23
Start Time:   Fri, 19 Jul 2019 11:13:19 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.1.8
Containers:
  liveness-exec-container:
    Container ID:  docker://e028a5c4796c9e138e9669292f5b82fc76244017463039ef2141dab3da9d5cdd
    Image:         busybox:latest
    Image ID:      docker-pullable://busybox@sha256:c94cf1b87ccb80f2e6414ef913c748b105060debda482058d2b8d0fce39f11b9
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 3600
    State:          Running
      Started:      Fri, 19 Jul 2019 11:14:34 +0800
    Last State:     Terminated
      Reason:       Error   # the error is reported here
      Exit Code:    137
      Started:      Fri, 19 Jul 2019 11:13:20 +0800
      Finished:     Fri, 19 Jul 2019 11:14:34 +0800
    Ready:          True
    Restart Count:  1       # shows it has been restarted once
    Liveness:       exec [test -e /tmp/healthy] delay=1s timeout=2s period=10s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bc86p (ro)
      ........
      ........

livenessProbe httpGet test

Create a Pod manifest file:

[root@master manifests]# cat liveness-httpget.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1     # this image serves on port 80
    imagePullPolicy: IfNotPresent
    ports:
    - name: http            # name the exposed port here
      containerPort: 80     # the port number
    livenessProbe:
      httpGet:
        port: http          # the named port can be referenced directly
        path: /index.html   # the HTTP path to probe
      initialDelaySeconds: 1
      periodSeconds: 3
      successThreshold: 1
      failureThreshold: 2
      timeoutSeconds: 2
[root@master manifests]# kubectl create -f liveness-httpget.yaml 
pod/liveness-httpget-pod created
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS             RESTARTS   AGE
liveness-httpget-pod            1/1     Running            0          3s

Manually enter the container and delete the index.html file:

[root@master ~]# kubectl exec -it liveness-httpget-pod -- /bin/sh
/ # rm -f /usr/share/nginx/html/index.html
/ # command terminated with exit code 137

Check the state of the pod just created:

[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
liveness-httpget-pod            1/1     Running   1          7m14s      # it has already been restarted once
[root@master manifests]# kubectl describe pods liveness-httpget-pod     # view the details
Name:         liveness-httpget-pod
Namespace:    default
Priority:     0
Node:         node02.kubernetes/10.0.20.22
Start Time:   Fri, 19 Jul 2019 11:26:56 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.2.6
Containers:
  liveness-httpget-container:
    Container ID:   docker://39d3905acd7dac561905e77772522e652843dc3f2e0023a07586d3db088ca87f
    Image:          ikubernetes/myapp:v1
    Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 19 Jul 2019 11:29:06 +0800
    Last State:     Terminated  # the last state was not normal
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 19 Jul 2019 11:26:57 +0800
      Finished:     Fri, 19 Jul 2019 11:29:06 +0800
    Ready:          True
    Restart Count:  1           # here you can see one restart
    Liveness:       http-get http://:http/index.html delay=1s timeout=2s period=3s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bc86p (ro)
.... ....
.... ....

It is restarted only once here, because when the livenessProbe detects that the container is unhealthy, the container is restarted, and the restarted container's image recreates the index.html file.

readinessProbe readiness probe test

Create a Pod manifest file with a readinessProbe configured:

[root@master manifests]# cat readiness-httpget.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      successThreshold: 1
      failureThreshold: 2
      timeoutSeconds: 2
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
readiness-httpget-pod           1/1     Running   0          53s

Enter readiness-httpget-pod, manually delete the index.html page file, and check again:

[root@master ~]# kubectl exec -it readiness-httpget-pod -- /bin/sh
/ # rm -rf /usr/share/nginx/html/index.html 
/ # ps  aux
PID   USER     TIME   COMMAND
    1 root       0:00 nginx: master process nginx -g daemon off;    # nginx is still running
    6 nginx      0:00 nginx: worker process
    7 root       0:00 /bin/sh
   13 root       0:00 ps aux
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
readiness-httpget-pod           0/1     Running   0          60s    # 0 containers of this pod are ready
[root@master manifests]# kubectl describe pods readiness-httpget-pod
Name:         readiness-httpget-pod
Namespace:    default
Priority:     0
Node:         node03.kubernetes/10.0.20.23
Start Time:   Mon, 22 Jul 2019 09:09:50 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.1.9
Containers:
  readiness-httpget-container:
    Container ID:   docker://71009463c970b6712d97c8692990caa8c2192841b6e075a1d0db9ac15a2d1cd7
    Image:          ikubernetes/myapp:v1
    Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 22 Jul 2019 09:09:51 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:http/index.html delay=1s timeout=2s period=3s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bc86p (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False   # shown as not ready in the details
  ContainersReady   False 
  PodScheduled      True
  .... ....
  .... ....

When the readiness probe finds the Pod is not ready, the service removes the Pod from its list of endpoints, so it stops receiving traffic.

Manually recreate the index.html file in the Pod:

/ # echo 'hi' > /usr/share/nginx/html/index.html
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
readiness-httpget-pod           1/1     Running   0          118s

It has recovered, and at this point the service adds the Pod back into its endpoints.

Pod Controller

  • ReplicaSet:

    1. Maintains the number of Pod replicas the user desires: creates more when there are too few, deletes extras when there are too many
    2. Uses a label selector to pick out the Pod replicas it manages
    3. Uses a Pod template to create new Pod resources
  • Deployment:

    1. Works on top of ReplicaSet: it does not control Pods directly, but controls ReplicaSets
    2. Supports rolling updates and rollbacks
    3. Provides declarative configuration
  • DaemonSet

    1. For stateless, daemon-like workloads
    2. Ensures that each node runs exactly one copy of the Pod
    3. Suited to system-level background tasks
    4. Manages only stateless applications; it cares about the group, not individual Pods
  • Job

    1. Runs a task to completion, once
  • CronJob

    1. Runs tasks periodically on a schedule
  • StatefulSet

    1. Manages stateful applications
  • TPR

    Third Party Resources; usable from version 1.2, no longer available in 1.7
    third-party resources
  • CRD

    CustomResourceDefinition, available after version 1.7 as the successor to TPR
    third-party resources
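Job and CronJob are listed above but not demonstrated later; as a sketch, a minimal CronJob might look like this (the schedule, name, and command are illustrative only):

```yaml
apiVersion: batch/v1beta1          # batch/v1 in newer clusters
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"          # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure # Jobs must use OnFailure or Never
          containers:
          - name: hello
            image: busybox:latest
            command: ["/bin/sh", "-c", "date; echo hello"]
```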

ReplicaSet controller

Create a ReplicaSet manifest file:

[root@master manifests]# cat rs-demo.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2       # desired number of Pod replicas
  selector:
    matchLabels:    # define the label selector
      app: myapp
      release: canary
  template:         # everything below is the same as a standalone Pod manifest
    metadata:       # the labels defined here are what the ReplicaSet controller selects
      name: myapp-pod
      namespace: default
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl create -f rs-demo.yaml 
replicaset.apps/myapp created
[root@master manifests]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
myapp-k4m9l   1/1     Running   0          27s   10.244.1.13   node03.kubernetes   <none>           <none>
myapp-p9dk9   1/1     Running   0          27s   10.244.3.14   node01.kubernetes   <none>           <none>

Two copies of the Pod has been created.

It can be edited directly with kubectl edit:

[root@master manifests]# kubectl edit rs myapp
replicaset.extensions/myapp edited      # after the editor opens, change replicas to 3, i.e. add one
[root@master manifests]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
myapp-p9dk9   1/1     Running   0          14m   10.244.3.14   node01.kubernetes   <none>           <none>
myapp-vjnlq   1/1     Running   0          8s    10.244.1.14   node03.kubernetes   <none>           <none>
myapp-wl9nh   1/1     Running   0          10m   10.244.2.10   node02.kubernetes   <none>           <none>      # a new Pod is created immediately
By the same logic, when replicas is decreased, the number of Pods decreases accordingly.
You can also edit the Pod template (for example, the image version) directly; editing it does not update existing Pods automatically, but when you delete a Pod manually, the replacement is created from the updated template.
[root@master manifests]# kubectl edit rs myapp
replicaset.extensions/myapp edited
[root@master manifests]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                 SELECTOR
myapp   3         3         3       15m   myapp-container   ikubernetes/myapp:v2   app=myapp,release=canary
[root@master manifests]# curl 10.244.3.14
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master manifests]# kubectl delete pods myapp-p9dk9
pod "myapp-p9dk9" deleted
[root@master manifests]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
myapp-fwnwz   1/1     Running   0          14s     10.244.3.15   node01.kubernetes   <none>           <none>
myapp-vjnlq   1/1     Running   0          2m33s   10.244.1.14   node03.kubernetes   <none>           <none>
myapp-wl9nh   1/1     Running   0          12m     10.244.2.10   node02.kubernetes   <none>           <none>
[root@master manifests]# curl 10.244.3.15
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Based on this strategy, you can implement a blue-green release, as illustrated in the figure below:

Figure: blue-green release with ReplicaSets

Deployment controller

Here we create the Deployment declaratively, using the kubectl apply command.

Start by creating a manifest:

[root@master manifests]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2   # 2 replicas
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl apply -f deploy-demo.yaml 
deployment.apps/myapp-deploy created

View:

[root@master manifests]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
myapp-deploy-f4db5d79c-md2kk   1/1     Running   0          4s
myapp-deploy-f4db5d79c-wwx5n   1/1     Running   0          4s
[root@master manifests]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   2/2     2            2           11s
[root@master manifests]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
myapp-deploy-f4db5d79c   2         2         2       43s
  1. Two Pods were created successfully and are running
  2. The Deployment is healthy
  3. The Deployment automatically created a ReplicaSet; it manages the ReplicaSet, and the ReplicaSet manages the Pods
  4. Pod names consist of the Deployment name, plus the hash of the Pod template that the ReplicaSet appends, plus a random suffix

View deployment details

[root@master manifests]# kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   3/3     3            3           2m34s
[root@master manifests]# kubectl describe deploy myapp-deploy
Name:                   myapp-deploy
Namespace:              default
CreationTimestamp:      Tue, 23 Jul 2019 17:05:49 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1        # every change is recorded in the annotations
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-deploy","namespace":"default"},"spec":{"replicas":3...
Selector:               app=myapp,release=canary
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate       # the default update strategy: rolling update
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge  # at most 25% unavailable and at most 25% extra; fractions are rounded up to a whole Pod
Pod Template:
  Labels:  app=myapp
           release=canary
  Containers:
   myapp:
    Image:        ikubernetes/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deploy-f4db5d79c (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m15s  deployment-controller  Scaled up replica set myapp-deploy-f4db5d79c to 2
  Normal  ScalingReplicaSet  50s    deployment-controller  Scaled up replica set myapp-deploy-f4db5d79c to 3
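The defaults shown in the describe output can be set explicitly in the Deployment spec; a sketch (the values shown are the defaults):

```yaml
# Fragment of a Deployment spec making the rolling-update
# parameters explicit; tune these to control the rollout.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most this fraction of Pods may be down during an update
      maxSurge: 25%         # at most this fraction of extra Pods may be created
  minReadySeconds: 0        # seconds a new Pod must be Ready before counting as available
```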

Rolling update test

In one window, watch the Pods carrying the label app=myapp:

[root@master manifests]# kubectl get pods -l app=myapp -w       # -w keeps watching, printing new lines only when something changes
NAME                           READY   STATUS    RESTARTS   AGE
myapp-deploy-f4db5d79c-cvvl6   1/1     Running   0          2m20s
myapp-deploy-f4db5d79c-md2kk   1/1     Running   0          4m45s
myapp-deploy-f4db5d79c-wwx5n   1/1     Running   0          4m45s

In another window, edit the deploy-demo.yaml manifest, changing the replica count to 5 (and the image to v2):

[root@master manifests]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 5       # the replica count is changed to 5 here
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80

Then apply the change declaratively with the kubectl apply command:

[root@master manifests]# kubectl apply -f deploy-demo.yaml 
deployment.apps/myapp-deploy configured

When the apply completes, changes appear in the monitoring window:

[root@master manifests]# kubectl get pods -l app=myapp -w
NAME                           READY   STATUS    RESTARTS   AGE
myapp-deploy-f4db5d79c-cvvl6   1/1     Running   0          2m20s
myapp-deploy-f4db5d79c-md2kk   1/1     Running   0          4m45s
myapp-deploy-f4db5d79c-wwx5n   1/1     Running   0          4m45s
myapp-deploy-f4db5d79c-cvvl6   1/1     Terminating   0          3m26s
myapp-deploy-55b78d8548-zsl5d   0/1     Pending       0          0s
myapp-deploy-55b78d8548-zsl5d   0/1     Pending       0          0s
myapp-deploy-55b78d8548-zsl5d   0/1     ContainerCreating   0          0s
myapp-deploy-f4db5d79c-cvvl6    0/1     Terminating         0          3m27s
myapp-deploy-55b78d8548-zsl5d   1/1     Running             0          1s
myapp-deploy-f4db5d79c-md2kk    1/1     Terminating         0          5m52s
myapp-deploy-55b78d8548-l264b   0/1     Pending             0          0s
myapp-deploy-55b78d8548-l264b   0/1     Pending             0          0s
myapp-deploy-55b78d8548-l264b   0/1     ContainerCreating   0          0s
myapp-deploy-f4db5d79c-cvvl6    0/1     Terminating         0          3m28s
myapp-deploy-f4db5d79c-cvvl6    0/1     Terminating         0          3m28s
myapp-deploy-f4db5d79c-md2kk    0/1     Terminating         0          5m53s
myapp-deploy-55b78d8548-l264b   1/1     Running             0          2s
myapp-deploy-f4db5d79c-wwx5n    1/1     Terminating         0          5m54s
myapp-deploy-f4db5d79c-wwx5n    0/1     Terminating         0          5m54s
myapp-deploy-f4db5d79c-md2kk    0/1     Terminating         0          6m1s
myapp-deploy-f4db5d79c-md2kk    0/1     Terminating         0          6m1s
myapp-deploy-f4db5d79c-wwx5n    0/1     Terminating         0          6m4s
myapp-deploy-f4db5d79c-wwx5n    0/1     Terminating         0          6m4s

Check the Pods at this point:

[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-55b78d8548-cw4r5   1/1     Running   0          5s
myapp-deploy-55b78d8548-jwj2j   1/1     Running   0          5s
myapp-deploy-55b78d8548-l264b   1/1     Running   0          4m20s
myapp-deploy-55b78d8548-mnm95   1/1     Running   0          5s
myapp-deploy-55b78d8548-zsl5d   1/1     Running   0          4m21s

View the ReplicaSets:

[root@master manifests]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                 SELECTOR
myapp-deploy-55b78d8548   5         5         5       59s     myapp        ikubernetes/myapp:v2   app=myapp,pod-template-hash=55b78d8548,release=canary
myapp-deploy-f4db5d79c    0         0         0       6m50s   myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=f4db5d79c,release=canary

Checking the ReplicaSets now shows two of them, because the Deployment creates a new ReplicaSet to carry out the update; the IMAGES column shows the different versions. The rolling update was achieved by creating new Pods under the new ReplicaSet.

DaemonSet controller

The DaemonSet controller is mainly used for stateless, system-level workloads.
By default it creates one Pod on every node in the kubernetes cluster.

Create a DaemonSet manifest:

[root@master manifests]# cat daemonset-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine    # use filebeat for this test
        env:                                        # add two environment variables
        - name: REDIS_HOST                          # the redis address
          value: redis.default.svc.cluster.local    # resolved via the cluster DNS name
        - name: REDIS_LOG_LEVEL                     # the log level
          value: info                               # set to info
[root@master manifests]# kubectl apply -f daemonset-demo.yaml 
daemonset.apps/myapp-ds created
[root@master manifests]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
myapp-ds-jxwfq                 1/1     Running   0          9s    10.244.1.22   node03.kubernetes   <none>           <none>
myapp-ds-mw9vw                 1/1     Running   0          9s    10.244.3.25   node01.kubernetes   <none>           <none>
myapp-ds-xjqdb                 1/1     Running   0          9s    10.244.2.18   node02.kubernetes   <none>           <none>

You can see it has been created, with exactly one Pod per node.

Check the details:

[root@master manifests]# kubectl describe ds myapp-ds
Name:           myapp-ds
Selector:       app=filebeat,release=stable
Node-Selector:  <none>
Labels:         <none>
Annotations:    deprecated.daemonset.template.generation: 1
                kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"myapp-ds","namespace":"default"},"spec":{"selector":{"matc...
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=filebeat
           release=stable
  Containers:
   filebeat:
    Image:      ikubernetes/filebeat:5.6.5-alpine
    Port:       <none>
    Host Port:  <none>
    Environment:
      REDIS_HOST:       redis.default.svc.cluster.local
      REDIS_LOG_LEVEL:  info
    Mounts:             <none>
  Volumes:              <none>
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  22m   daemonset-controller  Created pod: myapp-ds-qjg6w
  Normal  SuccessfulCreate  22m   daemonset-controller  Created pod: myapp-ds-ftxzb
  Normal  SuccessfulCreate  22m   daemonset-controller  Created pod: myapp-ds-66hnj

A single manifest file can also hold multiple resources, separated with ---

In the same manifest file, define a redis Deployment as well:

[root@master manifests]# cat daemonset-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info

Apply the manifest file again with kubectl apply:

[root@master manifests]# kubectl apply -f daemonset-demo.yaml 
deployment.apps/redis created
daemonset.apps/myapp-ds created
[root@master manifests]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
myapp-ds-66hnj                 1/1     Running   0          59s
myapp-ds-ftxzb                 1/1     Running   0          59s
myapp-ds-qjg6w                 1/1     Running   0          59s
redis-5c998b644f-wnzrd         1/1     Running   0          59s

Create a Service for redis:

[root@master manifests]# kubectl expose deployment redis --port=6379
service/redis exposed
[root@master manifests]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    15d
redis        ClusterIP   10.111.151.200   <none>        6379/TCP   4s

Enter the redis Pod and try to resolve the redis Service name:

[root@master manifests]# kubectl exec -it redis-5c998b644f-wnzrd -- /bin/sh
/data # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      
tcp        0      0 :::6379                 :::*                    LISTEN      
/data # ls
/data # nslookup redis.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      redis.default.svc.cluster.local
Address 1: 10.111.151.200 redis.default.svc.cluster.local       # the name resolves correctly

DaemonSet also supports rolling updates, but unlike Deployment, a DaemonSet updates by first deleting the old Pod and then creating the new one.

There are several ways to trigger an update; here we use kubectl set image:

[root@master manifests]# kubectl set image daemonset myapp-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/myapp-ds image updated
[root@master manifests]# kubectl get pods -o wide -w
NAME                           READY   STATUS              RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
myapp-deploy-f4db5d79c-7hnfg   1/1     Running             0          22h   10.244.2.16   node02.kubernetes   <none>           <none>
myapp-deploy-f4db5d79c-85hpm   1/1     Running             0          22h   10.244.3.22   node01.kubernetes   <none>           <none>
myapp-deploy-f4db5d79c-b9h4s   1/1     Running             0          22h   10.244.2.15   node02.kubernetes   <none>           <none>
myapp-deploy-f4db5d79c-tm9mt   1/1     Running             0          22h   10.244.1.20   node03.kubernetes   <none>           <none>
myapp-deploy-f4db5d79c-xp8t6   1/1     Running             0          22h   10.244.3.23   node01.kubernetes   <none>           <none>
myapp-ds-8tvmc                 0/1     ContainerCreating   0          10s   <none>        node03.kubernetes   <none>           <none>
myapp-ds-f2pp8                 1/1     Running             0          30s   10.244.2.20   node02.kubernetes   <none>           <none>
myapp-ds-ftxzb                 1/1     Running             0          25m   10.244.3.26   node01.kubernetes   <none>           <none>
redis-5c998b644f-wnzrd         1/1     Running             0          25m   10.244.1.24   node03.kubernetes   <none>           <none>
myapp-ds-8tvmc                 1/1     Running             0          14s   10.244.1.25   node03.kubernetes   <none>           <none>
myapp-ds-ftxzb                 1/1     Terminating         0          25m   10.244.3.26   node01.kubernetes   <none>           <none>
myapp-ds-ftxzb                 0/1     Terminating         0          25m   10.244.3.26   node01.kubernetes   <none>           <none>
myapp-ds-ftxzb                 0/1     Terminating         0          25m   10.244.3.26   node01.kubernetes   <none>           <none>
myapp-ds-ftxzb                 0/1     Terminating         0          25m   10.244.3.26   node01.kubernetes   <none>           <none>
myapp-ds-cs2hw                 0/1     Pending             0          0s    <none>        <none>              <none>           <none>
myapp-ds-cs2hw                 0/1     Pending             0          0s    <none>        node01.kubernetes   <none>           <none>
myapp-ds-cs2hw                 0/1     ContainerCreating   0          0s    <none>        node01.kubernetes   <none>           <none>
myapp-ds-cs2hw                 1/1     Running             0          17s   10.244.3.27   node01.kubernetes   <none>           <none>

Pods are updated one at a time: the old Pod is deleted, and the new Pod starts once the updated image has been pulled.
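This delete-then-create behavior can be tuned through the DaemonSet's updateStrategy field; a sketch (the maxUnavailable value is illustrative):

```yaml
# Fragment of a DaemonSet spec controlling the rolling update; with
# RollingUpdate, the controller deletes old Pods and creates new ones
# on at most maxUnavailable nodes at a time.
spec:
  updateStrategy:
    type: RollingUpdate     # the alternative is OnDelete: update only when Pods are deleted manually
    rollingUpdate:
      maxUnavailable: 1     # how many nodes' Pods may be updated simultaneously
```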

Origin www.cnblogs.com/winstom/p/11243090.html