3. Kubernetes Pod Scheduling (Part 1)

There are two ways to schedule a pod: one is through a Pod controller, the other is through properties defined on the Pod resource itself.

The role of Pod controllers
A Pod created directly, without a Pod controller, is not managed by anything, and it is difficult to use some of Kubernetes' features with it.

kubelet is the node agent of a K8s cluster; one instance runs on each worker node. When a worker node fails, its kubelet becomes unavailable too, so the Pods on that node can no longer be restarted by the kubelet. In that situation, Pod availability is generally guaranteed by a Pod controller running outside the worker node.

In the ideal K8s mode of operation, a Pod controller manages the Pods, ensuring that they run with the resources the user has defined.

Pod controller types
ReplicaSet (abbreviated rs)
An improved model of ReplicationController: it keeps Pods running (restarting or rebuilding them after a failure), maintains the desired Pod count (removing extras and adding what is missing), and supports scaling Pods out and in.
ReplicaSet supports set-based selectors (ReplicationController only supports equality-based selectors), as the snippet below illustrates.
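A minimal sketch of a set-based selector in a ReplicaSet spec; the key and values here are hypothetical:

selector:
  matchExpressions:       ## set-based: supports In, NotIn, Exists, DoesNotExist
  - key: tier
    operator: In
    values: ["frontend", "cache"]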

There are three main building blocks:

  1. Replicas : specifies the number of replicas desired by the user
  2. Selector : label selector, which determines which Pods the controller manages
  3. Template : Pod resource template used to create new Pods; for the template definition format, see the pod resource definition

Creating a ReplicaSet
ngx-relicatSet-yaml reads as follows:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: ngx-rs
    tier: frontend
spec:
  replicas: 3   ## 3 Pods desired in total
  selector:     ## select the Pods to manage by label
    matchLabels:
      tier: frontend
  template:    ## new Pods are generated from this template
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: ngxv2
        image: 192.168.80.146:5000/my_ngx:v2

Submit this file to the Kubernetes cluster; the ReplicaSet pod manager and the Pods it manages will be created:

[root@k8s-master k8s-yaml]# kubectl create -f ngx-relicatSet-yaml

View the status of the pod manager:

[root@k8s-master k8s-yaml]# kubectl get rs frontend
NAME       DESIRED   CURRENT   READY     AGE
frontend   3         3         3         10m
[root@k8s-master k8s-yaml]# kubectl describe rs/frontend
Name:         frontend
Namespace:    default
Selector:     tier=frontend
Labels:       app=ngx-rs
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  tier=frontend
  Containers:
   ngxv2:
    Image:        192.168.80.146:5000/my_ngx:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: frontend-6z7wn
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: frontend-dp9xv
  Normal  SuccessfulCreate  11m   replicaset-controller  Created pod: frontend-v8dkp

Verifying the relationship between the pod manager and a Pod not created from its template
With the replicaSet controller (frontend) created earlier still in place, now create pod1 (whose label matches the manager's selector) and observe what happens to it.
The YAML manifest for pod1 is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    tier: frontend  ## this label matches the selector of the replicaSet manager, so pod1 will be controlled by it
spec:
  containers:
  - name: hello1
    image: 192.168.80.146:5000/my_ngx:v2  

The ReplicaSet acquires the new, unmanaged Pod; since that would exceed the desired number of Pods, it terminates the new pod1.

[root@k8s-master k8s-yaml]# kubectl get pod -w
NAME             READY     STATUS        RESTARTS   AGE
frontend-6z7wn   1/1       Running       0          24m
frontend-dp9xv   1/1       Running       0          24m
frontend-v8dkp   1/1       Running       0          23m
pod1             0/1       Terminating   0          5s

Note: if pod1 is created first and the ReplicaSet afterwards, the ReplicaSet generates only two Pods from its template and pod1 keeps running.

Removing a Pod from a ReplicaSet
Simply edit the Pod's label; the manager automatically creates a new Pod to replace it.

[root@k8s-master k8s-yaml]# kubectl label pods frontend-6z7wn tier=test --overwrite  
pod/frontend-6z7wn labeled
[root@k8s-master k8s-yaml]# kubectl get  pods --show-labels            
NAME             READY     STATUS              RESTARTS   AGE       LABELS
frontend-6z7wn   1/1       Running             0          2h        tier=test
frontend-c9h5l   0/1       ContainerCreating   0          5s        tier=frontend
frontend-dp9xv   1/1       Running             0          2h        tier=frontend
frontend-v8dkp   1/1       Running             0          2h        tier=frontend

Modifying the replica count
Simply update the .spec.replicas field in the manifest file to scale the ReplicaSet out or in.
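A sketch of the only change needed in the manifest, assuming we shrink from 3 replicas to 2 (consistent with the output below):

spec:
  replicas: 2   ## was 3; on apply, the ReplicaSet terminates one surplus Pod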

[root@k8s-master k8s-yaml]# kubectl apply -f ngx-relicatSet-yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
replicaset.apps/frontend configured
[root@k8s-master k8s-yaml]# kubectl get  pods --show-labels      
NAME             READY     STATUS    RESTARTS   AGE       LABELS
frontend-6z7wn   1/1       Running   0          2h        tier=test
frontend-dp9xv   1/1       Running   0          2h        tier=frontend
frontend-v8dkp   1/1       Running   0          2h        tier=frontend

Unless you need custom update orchestration or no updates at all, Deployment is recommended over using ReplicaSet directly.

Deployment
A controller built on top of ReplicaSet: it manages Pods by controlling ReplicaSets, and provides declarative updates for Pods and ReplicaSets.

A Deployment describes a desired state, and the Deployment controller gradually updates the actual state toward it. By defining a Deployment you can create a new ReplicaSet, or delete an existing Deployment and let a new Deployment take over all of its resources.

Creating a Deployment

A deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
   name: ngx-deployment  ## a deployment named ngx-deployment
   labels:
      app: ngx
spec:
   replicas: 3   ## three Pods desired
   selector:
      matchLabels:  ## manage pods whose label is app: ngx
         app: ngx
   template:   ## the pod template
      metadata:
         labels:
           app: ngx ## set the pod label to app: ngx
      spec:
         containers:
         - name: ngxv2
           image: 192.168.80.146:5000/my_ngx:v2

Create this deployment:

[root@k8s-master k8s-yaml]# kubectl apply -f ngx-deployment.yaml 
deployment.apps/ngx-deployment created
[root@k8s-master k8s-yaml]# kubectl get deployments
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ngx-deployment   3         3         3            3           28s
[root@k8s-master k8s-yaml]# kubectl get rs
NAME                        DESIRED   CURRENT   READY     AGE
ngx-deployment-559486d8fd   3         3         3         12m
[root@k8s-master k8s-yaml]# kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
ngx-deployment-559486d8fd-4c49k   1/1       Running   0          13m
ngx-deployment-559486d8fd-hfht7   1/1       Running   0          13m
ngx-deployment-559486d8fd-pk967   1/1       Running   0          13m

The rs created by the deployment is named with the deployment's name as its prefix.

Updating the Pod's containers

A Deployment rollout is triggered when, and only when, the deployment's pod template (i.e. .spec.template) changes.
For example, downgrade the pod container image from v2 to v1:

[root@k8s-master k8s-yaml]# kubectl edit deployment.v1.apps/ngx-deployment  
"/tmp/kubectl-edit-ltme5.yaml" 69L, 2277C written

Note: this does not change ngx-deployment.yaml; kubectl writes the edit to a temporary file and applies that.
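Equivalently, the same template change can be made non-interactively; a sketch assuming the v1 tag exists in the local registry:

kubectl set image deployment.v1.apps/ngx-deployment ngxv2=192.168.80.146:5000/my_ngx:v1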

Watch the rollout progress:

[root@k8s-master k8s-yaml]# kubectl rollout status deployment.v1.apps/ngx-deployment
Waiting for deployment "ngx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...

Waiting for deployment "ngx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "ngx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "ngx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "ngx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "ngx-deployment" successfully rolled out

Observe the changes after the update: the old rs is retained, but only the new rs and its new pods are running.

[root@k8s-master k8s-yaml]# kubectl get rs
NAME                        DESIRED   CURRENT   READY     AGE
ngx-deployment-559486d8fd   0         0         0         35m
ngx-deployment-58d847f49c   3         3         3         13m
[root@k8s-master k8s-yaml]# kubectl get deployment
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ngx-deployment   3         3         3            3           35m
[root@k8s-master k8s-yaml]# kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
ngx-deployment-58d847f49c-fjsgr   1/1       Running   0          15m
ngx-deployment-58d847f49c-sjqrq   1/1       Running   0          15m
ngx-deployment-58d847f49c-znw5d   1/1       Running   0          15m

A deployment ensures that only a certain number of Pods may be down while they are being updated. By default, it ensures that at most 25% of the desired number of Pods are unavailable. It also ensures that only a certain number of Pods may be created above the desired number; by default, at most 25% more than the desired number of Pods.

For example, if you watch the deployment above closely, you will see that it first creates a new Pod, then deletes some old Pods and creates new ones. It does not kill old Pods until enough new Pods have come up, and does not create new Pods until enough old Pods have been killed. This guarantees that at least 2 Pods are available and that at most 4 Pods in total exist.

Rolling back
Before rolling back, first view the rollout history and the details of each revision:

[root@k8s-master k8s-yaml]# kubectl rollout history deployment.v1.apps/ngx-deployment
deployments "ngx-deployment"
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
[root@k8s-master k8s-yaml]# kubectl rollout history deployment.v1.apps/ngx-deployment --revision=3
deployments "ngx-deployment" with revision #3
Pod Template:
  Labels:       app=ngx
        pod-template-hash=1150428498
  Containers:
   ngxv2:
    Image:      192.168.80.146:5000/my_ngx:v2
    Port:       <none>
    Host Port:  <none>
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
[root@k8s-master k8s-yaml]# kubectl rollout history deployment.v1.apps/ngx-deployment --revision=4
deployments "ngx-deployment" with revision #4
Pod Template:
  Labels:       app=ngx
        pod-template-hash=1484039057
  Containers:
   ngxv2:
    Image:      192.168.80.146:5000/my_ngx:v1
    Port:       <none>
    Host Port:  <none>
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

Roll back to a specified revision:

[root@k8s-master k8s-yaml]# kubectl rollout undo deployment.v1.apps/ngx-deployment --to-revision=3
deployment.apps/ngx-deployment

Scaling the deployment

Scale the deployment with a command:
kubectl scale deployment.v1.apps/ngx-deployment --replicas=6
Enable autoscaling:
kubectl autoscale deployment.v1.apps/ngx-deployment --min=5 --max=10 --cpu-percent=80

Pausing and resuming a deployment

You can pause a deployment and resume it later. While paused, changes to the deployment are not rolled out immediately; they take effect once the deployment is resumed.

kubectl rollout pause deployment.v1.apps/ngx-deployment
kubectl set image deployment.v1.apps/ngx-deployment ngxv2=192.168.80.146:5000/my_ngx:v1
kubectl rollout resume deployment.v1.apps/ngx-deployment

Other common deployment fields

.spec.revisionHistoryLimit specifies how many old ReplicaSets to retain for this deployment; the rest are garbage-collected in the background. The default is 10. Explicitly setting this field to 0 cleans up the deployment's entire history, after which the deployment cannot be rolled back.

.spec.strategy specifies the strategy for replacing old Pods. Recreate kills all existing Pods before new ones are created. RollingUpdate updates Pods in a rolling fashion, and maxUnavailable and maxSurge can be specified to control the rolling update process.
maxUnavailable: the maximum number of Pods that may be unavailable during the update; the value can be an absolute number (e.g., 5) or a percentage of the desired Pods (e.g., 10%).
maxSurge: the maximum number of Pods that may be created above the desired number during the update; the value can be an absolute number (e.g., 5) or a percentage of the desired Pods (e.g., 10%).
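A sketch of how these fields sit in a deployment spec, with the default values written out explicitly:

spec:
  revisionHistoryLimit: 10     ## keep 10 old ReplicaSets (the default)
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%      ## at most 25% of desired Pods down during the update
      maxSurge: 25%            ## at most 25% extra Pods above the desired count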

A deployment manages and controls multiple rs, but ensures that only one rs is in the running state; the others are kept in a cold-standby state. The logical sequence of a gray-release (canary) update is as follows:
[Figure: gray-release update sequence across ReplicaSets]

DaemonSet
Ensures that all nodes (or a specified subset of nodes) run one copy of a specified Pod. A newly added node automatically runs the pod; when a node leaves the cluster its Pods are reclaimed; deleting a DaemonSet cleans up all the Pods it created.
Commonly used for node-level applications such as:

  1. Running a cluster storage daemon on each node, e.g. glusterd, ceph.
  2. Running a log-collection daemon on each node, e.g. fluentd, logstash.
  3. Running a performance-monitoring daemon on each node, e.g. Prometheus Node Exporter, collectd.

Two main building blocks:
Selector : label selector, which determines which Pods the controller manages
Template : Pod resource template used to create new Pods; for the template definition format, see the pod resource definition

Running Pods only on certain nodes

  1. If the .spec.template.spec.nodeSelector field is specified, the DaemonSet controller creates
    Pods only on nodes matching the node selector.
  2. If the .spec.template.spec.affinity field is specified, the DaemonSet controller creates
    Pods only on nodes matching the node affinity.

If neither is specified, the DaemonSet controller creates Pods on all nodes, as the sketch below shows.
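A minimal DaemonSet sketch based on the log-collection use case above; the name, labels, node label, and image reference are hypothetical, and the nodeSelector stanza shows how to restrict the set of nodes (omit it to run on every node):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-ds          ## hypothetical name
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:         ## optional: only nodes carrying this label run the pod
        role: logging       ## hypothetical node label
      containers:
      - name: fluentd
        image: fluentd      ## hypothetical image reference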

Update policy
DaemonSet has two update policies:

OnDelete: the default update policy, kept for backward compatibility. With OnDelete, after the DaemonSet template is updated, new DaemonSet pods are created only when you manually delete old DaemonSet pods. This matches DaemonSet behavior in Kubernetes 1.5 and earlier.

RollingUpdate: with the RollingUpdate strategy, after the DaemonSet template is updated, old DaemonSet pods are terminated in a controlled manner and new DaemonSet pods are created automatically.
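A sketch of where the strategy is declared in the DaemonSet spec:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   ## at most one node's pod is down at any time during the update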

Job
Used to manage Pods that perform one-off tasks. When the task completes, the Pod exits normally and is not restarted (no self-healing occurs). A sample manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
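Assuming the manifest above is saved as pi-job.yaml, the job can be run and its result read back like this:

kubectl create -f pi-job.yaml
kubectl get jobs       ## the job shows as successful once the pod exits normally
kubectl logs job/pi    ## prints the 2000 digits of pi computed by the container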

CronJob
Runs a Pod periodically, or just once at a given point in time.
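A minimal CronJob sketch; the name, image, and schedule are hypothetical, and the apiVersion is batch/v1beta1 on clusters of the era shown here (batch/v1 on Kubernetes 1.21+):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   ## standard cron syntax: run every minute
  jobTemplate:              ## a Job is created from this template at each tick
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure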

StatefulSet
Suitable for managing stateful Pods, which involve some initialization process. Pods created by a Deployment are stateless: once a Volume is mounted, if the pod dies the RS rebuilds a new pod, but because the Deployment keeps no state, the new pod cannot mount the original volume again.

To solve such problems, StatefulSet was introduced to retain the Pods' state information. Its scenarios include:

  1. Stable persistent storage, i.e. a Pod can access the same persistent data after being rescheduled; implemented with PVCs.
  2. Stable network identifiers, i.e. a Pod's PodName and HostName stay unchanged after rescheduling; implemented with a Headless Service (i.e. a Service without a ClusterIP).
  3. Ordered deployment and ordered scale-out, i.e. Pods are ordered and are deployed or scaled out in the defined sequence (from 0 to N-1; before a Pod runs, all Pods before it must be Running and Ready); implemented with init containers.
  4. Ordered scale-in and ordered deletion (i.e. from N-1 down to 0).
  5. Ordered rolling updates (with compatible versions, updating the secondary nodes first).

StatefulSet consists of the following components (a combined sketch follows the list):

  1. Headless Service : generates resolvable DNS records for the Pods, letting DNS point directly at each pod, so traffic for the domain name lands directly on the pod.
  2. volumeClaimTemplates : a storage volume claim template (a template for creating PVCs; one PVC, and thus one storage volume, is generated per Pod), providing each Pod with its own fixed storage backed by statically or dynamically provisioned PVs (in a distributed application, the pods' storage volumes are distinct from one another). Within one pod template, however, only the same volume claim template can be used.
  3. StatefulSet : the controller for the Pod resources.
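A minimal sketch combining the three components; the names, mount path, and storage size are hypothetical, while the image reuses the one from the earlier examples:

apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
spec:
  clusterIP: None           ## headless: DNS resolves straight to the pods
  selector:
    app: ngx-sts
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: ngx-svc      ## must reference the headless service above
  replicas: 2
  selector:
    matchLabels:
      app: ngx-sts
  template:
    metadata:
      labels:
        app: ngx-sts
    spec:
      containers:
      - name: ngx
        image: 192.168.80.146:5000/my_ngx:v2
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     ## one PVC per pod: data-web-0, data-web-1
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi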

Note: StatefulSet cannot simply be applied to every stateful application; users still need custom scripts to handle the specifics of the application.

Deleting a statefulset does not delete its PVCs; when the statefulset is recreated, the Pods it manages re-establish their relationship with the PVCs through the PVC names derived from volumeClaimTemplates.

Domain name format inside a k8s cluster:
pod_name.service_name.ns_name.svc.cluster.local — the svc.cluster.local part (marked in red in the original) is fixed.
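For example, with the sketch above, pod web-0 would resolve as web-0.ngx-svc.default.svc.cluster.local (assuming the default namespace).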

Partitioned updates: specify a partition number N; only the sts pods whose ordinal is greater than or equal to N are updated.
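A sketch of where the partition is set in the sts spec:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   ## only pods with ordinal >= 2 (web-2, web-3, ...) are updated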


Origin blog.csdn.net/weixin_42155272/article/details/90210336