K8S Pod controller explanation

Table of contents

1. Pod controller categories

1. ReplicaSet

2. Deployment

3. DaemonSet

4. Job

5. CronJob

6. StatefulSet

7. CRD

8. Helm

2. ReplicaSet resource list

3. Deployment resource list

1. strategy (Pod update strategy)

2. revisionHistoryLimit

3. paused

4. template

5. Deployment resource list example

5.1. Update operation

5.2. Update the resource list configuration by patching

5.3. Suspend Pod update

5.4. Resume Pod update

5.5. Rollback operation

4. DaemonSet resource list


1. Pod controller categories

1. ReplicaSet

The ReplicaSet controller is used to manage stateless Pod resources. Its core function is to create a specified number of Pod replicas on behalf of the user and to ensure that the number of replicas always equals the number the user expects. It also supports mechanisms such as rolling updates and automatic scaling of Pods; it is regarded as the new generation of the ReplicationController.

ReplicaSet mainly consists of three components:

  1. The number of Pod replicas expected by the user;
  2. A label selector, used to select the Pods that the ReplicaSet manages or controls; if the number of Pods matched by the label selector is less than the defined number of replicas, new Pods are created from the Pod resource template until the desired count is reached;
  3. The Pod resource template.

However, Kubernetes does not recommend using ReplicaSet directly; you should use Deployment instead.

2. Deployment

Deployment is also a Pod controller, but it works on top of ReplicaSet: a Deployment controls Pods by controlling ReplicaSets. Deployment provides more powerful features than ReplicaSet, for example Pod version rollback and declarative configuration (declarative configuration means the configuration of created Pods can be changed at any time and applied to the Pods). Deployment is currently the best controller for managing stateless applications.

3. DaemonSet

DaemonSet is used to ensure that each node in the cluster runs exactly one replica of a specific Pod; it is usually used for system-level background tasks. The advantage of hosting such a task on Kubernetes is that if the background task goes down, the DaemonSet controller automatically rebuilds the Pod, and if a new node joins the cluster, the controller also creates such a Pod on the new node.

Constraint: according to our own needs, we can also run exactly one such Pod replica only on the nodes in the cluster that meet certain conditions, as sketched below.
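
As a sketch of that constraint (not from the original article; the label disktype: ssd and all names are assumed examples), a nodeSelector in the Pod template limits which nodes run the DaemonSet's Pod:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent-demo          # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent-demo
  template:
    metadata:
      labels:
        app: node-agent-demo
    spec:
      nodeSelector:              # restricts scheduling to nodes carrying this label
        disktype: ssd
      containers:
      - name: agent
        image: busybox
        command: ["sh", "-c", "sleep 3600"]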

Summary: the services in the Pods managed by Deployment and DaemonSet are stateless and daemon-like (they must run continuously in the background). But for tasks that should run once and then end, these two controllers obviously cannot be used. For example, when we back up a database, the task should end once the backup finishes instead of continuing to run in the background. A task of this kind runs only once: as long as it completes it exits normally, and it is rebuilt only if it fails to complete. This is why a controller like Job should be used for it.

4. Job

The Job controller manages tasks that are executed only once. It ensures the task really completes normally and exits; if the task exits abnormally, the Job controller rebuilds it and runs it again until it completes normally.
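
A minimal Job manifest sketch; the name, image and command are assumed placeholders rather than anything from the original article:

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-demo              # hypothetical name
spec:
  backoffLimit: 4                # retry at most 4 times on failure
  template:
    spec:
      restartPolicy: OnFailure   # Job Pods must not use restartPolicy: Always
      containers:
      - name: backup
        image: busybox
        command: ["sh", "-c", "echo backing up && sleep 10"]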

What about periodic tasks? The Job controller obviously cannot handle those either; that is what CronJob is for.

5. CronJob

CronJob is similar to Job in that each run also exits when it finishes. The difference is that a Job runs only once, while a CronJob runs periodically, and each run is expected to exit normally. What if the previous run has not finished when the next run is due? CronJob can handle that situation as well, as the sketch below shows.
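
A minimal CronJob sketch with an assumed schedule and command; concurrencyPolicy is the field that controls what happens when a new run is due while the previous one is still running:

apiVersion: batch/v1             # use batch/v1beta1 on clusters older than v1.21
kind: CronJob
metadata:
  name: backup-cron-demo         # hypothetical name
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  concurrencyPolicy: Forbid      # skip the new run if the previous one is still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo periodic backup && sleep 10"]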

6. StatefulSet

The StatefulSet controller manages stateful Pod replicas; each replica is managed individually and has its own unique identity and its own data set. Once a replica fails, a number of initialization operations have to be performed before the Pod is rebuilt.

Take a Redis cluster as an example: if one of its three nodes goes down, then in order not to lose data when any node goes down, the traditional approach is to configure master-slave replication for each node. After a master goes down, a slave has to be promoted to master manually, and restoring the failed node as a slave again requires a lot of operations work.

But when StatefulSet is used to define and manage Redis, MySQL or ZooKeeper, their configurations all differ; for example, the steps for managing Redis master-slave replication are different from those for MySQL master-slave replication, so there is no single rule such configuration can follow. What StatefulSet provides is only a wrapper: the user writes the complex logic that would otherwise require manual operations as scripts and places them in the StatefulSet's Pod template, so that after a Pod fails it can be recovered automatically by the script.
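
As a minimal sketch only (all names are assumed, and volumeClaimTemplates needs a default StorageClass or pre-provisioned PersistentVolumes), a StatefulSet is typically paired with a headless Service so each replica gets a stable identity:

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  clusterIP: None                # headless Service: gives each replica a stable DNS name
  selector:
    app: nginx-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-demo                 # Pods are named web-demo-0, web-demo-1, ...
spec:
  serviceName: nginx-demo
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # each replica gets its own PersistentVolumeClaim (its own data set)
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi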

Summary: truly hosting stateful applications on Kubernetes is still quite difficult.

7. CRD

CRD (CustomResourceDefinition), K8S 1.8+: lets users define their own resource types.

8. Helm

An application that is not easy for ordinary users is unlikely to succeed, and writing K8S resource manifests by hand has a high barrier to entry. This is what gave birth to Helm. For Kubernetes, Helm is the equivalent of yum on a Linux system: when deploying large applications, you can install and deploy them directly with Helm. Helm has existed for only a little more than two years, yet many large, mainstream applications can already be deployed through it.
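
For illustration, a hedged example of installing an application with Helm 3; the chart repository and chart name are common examples, not something from the original article:

helm repo add bitnami https://charts.bitnami.com/bitnami   # register a chart repository
helm repo update
helm install my-redis bitnami/redis                        # Helm 3: release name first, then chart
helm list                                                  # list installed releases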

2. ReplicaSet resource list

First-level fields used when defining a ReplicaSet resource list:

apiVersion: apps/v1
kind: ReplicaSet
metadata: 
spec: 

There are three core fields nested under spec:

replicas    <integer>       # desired number of Pod replicas
selector    <Object>        # label selector
template    <Object>        # Pod template

Nested inside the template field are the fields of a Pod resource definition:

metadata    <Object>        # metadata of the Pod
spec        <Object>        # spec of the Pod

Next, let's define a Resource List for ReplicaSet: rs-demo.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80


Create a ReplicaSet called myapp using the above listing:

kubectl create -f rs-demo.yaml

View the successfully created ReplicaSet:

ReplicaSet can be abbreviated as rs.
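
For example, the ReplicaSet can be inspected as follows; the output shown is illustrative:

kubectl get rs myapp -o wide
# illustrative output:
# NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                 SELECTOR
# myapp   2         2         2       1m    myapp-container   ikubernetes/myapp:v1   app=myapp,release=canary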

It is worth noting that the Pod name defined in the Pod template (name: myapp-pod in the code above) actually has no effect, because Pods created through a controller manifest are named in the form controller-name-random-string. As shown below:

Get the created Pod:
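
For example (the suffixes after "myapp-" below are illustrative random strings):

kubectl get pods -l app=myapp,release=canary
# illustrative output:
# NAME          READY   STATUS    RESTARTS   AGE
# myapp-7xk2p   1/1     Running   0          2m
# myapp-q9d4z   1/1     Running   0          2m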

Diagram of the Pod update after the container image in the Pod template is changed to a new version:
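
A hedged way to reproduce this, not necessarily how the original did it: update the image in the ReplicaSet's Pod template, then delete a Pod so the controller recreates it from the updated template (a ReplicaSet does not replace already-running Pods on its own):

kubectl set image rs/myapp myapp-container=ikubernetes/myapp:v2   # updates only the Pod template
kubectl delete pod <one-existing-pod-name>                        # placeholder; the replacement Pod is built from the updated template
kubectl get pods -l app=myapp -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image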

3. Deployment resource list

The number of ReplicaSet revisions retained by a Deployment can be customized by the user; 10 historical revisions are kept by default. Deployment supports declarative configuration, which uses the kubectl apply -f demo.yaml command, and a declaratively configured resource can later be modified and updated by patching it with the patch subcommand on the command line.

When a Deployment performs a rolling update of its Pods, the rolling update logic can also be configured.

The relationship between Deployment, ReplicaSet, and Pod:

 

Deployment can be abbreviated as deploy.
Next, look at the first-level fields of the Deployment resource list:

apiVersion: apps/v1
kind: Deployment
metadata: 
spec:

The spec of Deployment is not much different from that of ReplicaSet.
The spec field in Deployment:

replicas    <integer>   # number of Pod replicas
selector    <Object>    # label selector
template    <Object> -required-     # Pod template
strategy    <Object>    # Pod update strategy
paused      <boolean>
revisionHistoryLimit    <integer>

1. strategy (Pod update strategy)

spec:
  strategy:
    type:                     # <string>
    rollingUpdate:            # <Object>

The value of the strategy.type field:

  • Recreate: rebuild-style update, that is, delete the existing Pods and then create new ones. When type is Recreate, the rollingUpdate field at the same level as type is ignored; when type is RollingUpdate, that sibling rollingUpdate field defines the rolling update strategy.
  • RollingUpdate: rolling update.

strategy.rollingUpdate field:

  • maxSurge: during a rolling update, the number of Pods that may temporarily exceed the replica count defined by replicas. The value can be given in two ways: 1. an absolute number (e.g. 5); 2. a percentage (e.g. 10%).
  • maxUnavailable: during a rolling update, the maximum number of Pods that may be unavailable. Assuming replicas is 5 and maxUnavailable is 1, at least 5-1=4 Pods must remain available. This field can also take a percentage as its value.

For example, to allow at most 2 extra Pods and at most 1 unavailable Pod during a rolling update, define it as follows:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
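
With replicas set to 5, this configuration allows up to 5+2=7 Pods to exist during the update, while at least 5-1=4 must remain available at all times.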

2. revisionHistoryLimit

revisionHistoryLimit <integer> indicates the number of old ReplicaSets to keep in order to allow rollback. The default is 10. A value of 0 means old revisions are not kept.

3. paused

paused <boolean> indicates whether the deployment of the Pods is paused. Normally it is not paused, and the Pods are deployed immediately after the command is executed.

4. template

template <Object> -required- is the Pod template; it is defined in the same way as the Pod template in a ReplicaSet.

5. Deployment resource list example

Edit the yaml file: deploy-demo.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary 
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

Here we create it declaratively:

[root@k8s-master manifests]# kubectl apply -f deploy-demo.yaml 
deployment.apps/myapp-deploy created

Take a look at the result created:

At the same time, a ReplicaSet will be created automatically, as shown in the figure below:

Naming rule for the automatically created ReplicaSet: deployment-name-PodTemplateHash.

View the created Pod, as shown below:

Pod naming rule: deployment-name-PodTemplateHash-random-string.
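
For example, the naming relationship can be seen as follows; the hash and the random suffixes are illustrative:

kubectl get deploy,rs,pods -l app=myapp
# illustrative output:
# NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
# deployment.apps/myapp-deploy   2/2     2            2           1m
#
# NAME                                      DESIRED   CURRENT   READY   AGE
# replicaset.apps/myapp-deploy-658b9c7db5   2         2         2       1m
#
# NAME                                READY   STATUS    RESTARTS   AGE
# pod/myapp-deploy-658b9c7db5-7xk2p   1/1     Running   0          1m
# pod/myapp-deploy-658b9c7db5-q9d4z   1/1     Running   0          1m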

At this point, if we want to change the number of Pod replicas, we can simply edit the file with vim deploy-demo.yaml, change it to replicas: 4, and then run kubectl apply -f deploy-demo.yaml again.

Note: with declarative creation, the same yaml file can be applied multiple times with kubectl apply, whereas kubectl create can only be executed once.

In the example listing above there are many fields we did not define; Kubernetes fills in their default values automatically. You can view them with the command kubectl get deploy myapp-deploy -o yaml.

5.1. Update operation

Let's edit deploy-demo.yaml first and change the container image version to v2:
vim deploy-demo.yaml
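
For clarity, the only change in the manifest is the image tag:

      containers:
      - name: myapp
        image: ikubernetes/myapp:v2    # changed from ikubernetes/myapp:v1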

Deploy again:

[root@k8s-master manifests]# kubectl apply -f deploy-demo.yaml 
deployment.apps/myapp-deploy configured

Access the new Pods and you can see they have been updated to version v2.
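
One hedged way to verify this, assuming the Pods serve HTTP on port 80 as defined in the manifest:

kubectl get pods -l app=myapp -o wide     # -o wide includes each Pod's IP
curl http://<pod-ip>/                     # replace <pod-ip> with an IP from the output above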

View the rollout history:

[root@k8s-master manifests]# kubectl rollout history deployment myapp-deploy 
deployments "myapp-deploy"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

5.2. Update the resource list configuration by patching

[root@k8s-master manifests]# kubectl patch deployment myapp-deploy -p '{"spec": {"replicas": 5}}'
deployment.extensions/myapp-deploy patched

You can see that the number of Pod replicas has been updated to 5 immediately.

Modify the rolling update strategy by patching:

[root@k8s-master manifests]# kubectl patch deployment myapp-deploy -p '{"spec": {"strategy": {"type": "RollingUpdate", "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}}}}'
deployment.extensions/myapp-deploy patched

Take a look at the rolling update strategy; the modification has been applied:
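
For example, kubectl describe shows the effective strategy; the line below is illustrative:

kubectl describe deployment myapp-deploy | grep RollingUpdateStrategy
# illustrative output:
# RollingUpdateStrategy:  0 max unavailable, 1 max surge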

5.3. Suspend Pod update

For the sake of this example, we first modify the image version and then immediately pause the rolling update.
To modify the image version, besides editing the yaml file directly with vim or patching with kubectl patch, you can also change the image directly with kubectl set image. Example: kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1
The command is as follows:

[root@k8s-master ~]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy 
deployment.extensions/myapp-deploy image updated
deployment.extensions/myapp-deploy paused

Open another terminal to watch the Pods with the label app=myapp:
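
The command typically used for this kind of monitoring is:

kubectl get pods -l app=myapp -w     # -w keeps watching and prints Pod changes as they happen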

As can be seen from the figure above, after the pause command is executed, a new Pod is created and started first, the rollout then pauses, and none of the old Pods is deleted. At this point there are 6 Pods, as shown in the figure below:

With the command kubectl rollout status deployment myapp-deploy you can see the status of the rolling update, as shown in the figure below:

The message in the figure tells us: waiting for the "myapp-deploy" rollout to finish, 1 out of 5 new replicas have been updated...

Next we resume Pod updates...

5.4. Resume Pod update

To resume the Pod update, use the command:

[root@k8s-master ~]# kubectl rollout resume deployment myapp-deploy 
deployment.extensions/myapp-deploy resumed

Check the status of the rolling update:

As can be seen from the figure, the rolling update is completed.

Take a look at the status of the ReplicaSet:
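
For example, with illustrative names and output (SELECTOR column omitted):

kubectl get rs -l app=myapp -o wide
# illustrative output: the new ReplicaSet runs v3 with 5 ready Pods, the older ones are scaled to 0
# NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES
# myapp-deploy-5d645d645f   5         5         5       5m    myapp        ikubernetes/myapp:v3
# myapp-deploy-7c6bd8bf96   0         0         0       15m   myapp        ikubernetes/myapp:v2
# myapp-deploy-658b9c7db5   0         0         0       30m   myapp        ikubernetes/myapp:v1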

From the figure above, you can see that the image has been updated to version v3, and 5 Pods are ready.

Next, perform a rollback operation...

5.5. Rollback operation

If there is a problem with the new version of the application Pod and you want to roll back, you need the command kubectl rollout undo:

kubectl rollout undo deployment myapp-deploy --to-revision=1

--to-revision=1 means roll back to revision 1. If the --to-revision option is not specified, the default is to roll back to the previous revision. The revisions can be viewed with the command kubectl rollout history deployment myapp-deploy.

After the rollback, the original revision 1 becomes revision 4, so the revision before revision 4 is revision 3; rolling back one more step from revision 4 would therefore land on revision 3.
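
For illustration, after this rollback the revision list would look roughly like this:

kubectl rollout history deployment myapp-deploy
# illustrative output: revision 1 has become revision 4
# REVISION  CHANGE-CAUSE
# 2         <none>
# 3         <none>
# 4         <none>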

Let's take a look at the status of ReplicaSet:

As can be seen from the figure above, it has been rolled back to version v1, and 5 Pods in version v1 are ready. The number of Pods in the v3 version is 0.

4. DaemonSet resource list

The DaemonSet controller runs Pods that implement system-level management functions on designated nodes, and each designated node runs only one replica of such a Pod. You can also mount a directory of the node into the Pod and implement some management functions through the Pod.

When defining a DaemonSet resource list, the replicas field is no longer needed to specify the replica count.

The subfields contained in the spec field in the DaemonSet resource manifest file:

revisionHistoryLimit    <integer>   # number of old revisions to keep, same meaning as this field in Deployment
selector    <Object>    # label selector
template    <Object> -required-     # Pod template
updateStrategy  <Object>            # Pod update strategy

Update strategy:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2

DaemonSet supports two Pod update strategies: "RollingUpdate" and "OnDelete". Only when type is RollingUpdate does the sibling rollingUpdate field take effect, and rollingUpdate has only a single sub-field, maxUnavailable. In other words, while a DaemonSet's Pods are being updated there can only be fewer Pods than desired, never more.

DaemonSet resource list example:

apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: filebeat-ds
  namespace: default
spec: 
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata: 
      labels:
        app: filebeat
        release: stable
    spec: 
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env: 
        - name: REDIS_HOST
          value: redis-svc
        - name: REDIS_LOG_LEVEL
          value: info

As noted above, there is no need to specify replicas when defining the DaemonSet resource list.
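
A hedged way to create and check this DaemonSet; note the manifest assumes a reachable Redis host named redis-svc (for example a Service with that name), and the node counts below are illustrative:

kubectl apply -f filebeat-ds.yaml
kubectl get ds filebeat-ds
# illustrative output: DESIRED/CURRENT/READY equal the number of schedulable nodes
# NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
# filebeat-ds   3         3         3       3            3           <none>          1m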
