Kubernetes: (12) k8s controllers

Table of contents

One: Pod controller

Two: The relationship between the Pod and the controller

Three: Deployment (stateless)

Four: ReplicaSet (RS) 

4.1 Resource list file of ReplicaSet

4.2 The difference between stateful and stateless

4.3 The difference between regular service and headless service

4.4 Examples

4.4.1 Create DNS resources first

4.4.2 Then use the statefulset controller type to create nginx pod resources and create headless service resources

4.4.3 Summary

Five: DaemonSet

Six: Job  

Seven: CronJob 

Eight: Horizontal Pod Autoscaler (HPA) 

8.1 Install metrics-server

8.2 Prepare deployment and service 

8.3 Deploy HPA 

8.4 Testing


One: Pod controller

Pod is the smallest management unit of kubernetes. In kubernetes, pods can be divided into two categories according to how they are created:

  • Autonomous pods: pods created directly, without a controller; once deleted they are gone and are not rebuilt
  • Controller-created pods: pods created through a controller; if such a pod is deleted, it is automatically rebuilt

What is a pod controller?

The Pod controller is an intermediate layer for managing pods. With a controller, you only need to declare how many pods of what kind you want; the controller creates pods that meet those conditions and keeps every pod resource in the user's desired target state. If a pod fails while running, the controller recreates or reschedules pods according to the specified policy.
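This "declare the desired state and let the controller converge toward it" idea can be sketched as a reconciliation loop. The sketch below is purely illustrative (not real Kubernetes code); the function and pod names are invented for the example:

```python
# Minimal sketch of a controller reconciliation pass (illustrative only):
# compare the desired replica count with the observed pods and decide
# which pods to create or delete so the two match.

def reconcile(desired_replicas, running_pods):
    """Return the actions a controller would take in one sync pass."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:   # too few pods: create the missing ones
        return [("create", f"pod-{i}")
                for i in range(len(running_pods), desired_replicas)]
    if diff < 0:   # too many pods: delete the surplus
        return [("delete", name) for name in running_pods[desired_replicas:]]
    return []      # already at the desired state

print(reconcile(3, ["pod-0"]))            # two pods missing -> two creates
print(reconcile(1, ["pod-0", "pod-1"]))   # one pod too many -> one delete
```

A real controller runs this loop continuously, which is why a deleted controller-managed pod reappears: the next pass sees one pod too few and creates a replacement.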

In kubernetes, there are many types of pod controllers, each of which has its own suitable scenarios. The common ones are as follows:

  1. ReplicationController: the original pod controller, now deprecated and superseded by ReplicaSet
  2. ReplicaSet: keeps the number of replicas at the expected value; supports scaling the pod count and upgrading the image version
  3. Deployment: controls Pods by controlling ReplicaSets; supports rolling upgrades and version rollback
  4. Horizontal Pod Autoscaler: automatically adjusts the number of Pods horizontally according to cluster load, smoothing peaks and filling valleys
  5. DaemonSet: runs exactly one replica on each (selected) Node in the cluster; typically used for daemon-style tasks
  6. Job: the pods it creates exit as soon as their task completes, without restart or rebuild; used for one-off tasks
  7. CronJob: the pods it creates run periodic tasks and do not need to run continuously in the background
  8. StatefulSet: manages stateful applications

Two: The relationship between the Pod and the controller

Controllers are objects that manage and run containers on the cluster; they are associated with their pods through label selectors.

Through controllers, pods gain operational capabilities such as scaling and upgrading.


Three: Deployment (stateless)

The main functions of Deployment are as follows:

  • Support all functions of ReplicaSet
  • Support release stop, continue
  • Support for rolling upgrades and rollback versions

Features:

  1. Deploy stateless applications, only care about the number, regardless of roles, etc., called stateless
  2. Manage Pods and ReplicaSets
  3. With online deployment, copy setting, rolling upgrade, rollback and other functions
  4. Provides declarative updates, such as only updating a new Image
  5. Application scenario: web service

[root@master ~]# vim nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80

kubectl create -f nginx-deployment.yaml
kubectl get pods,deploy,rs

View the controller:

kubectl edit deployment/nginx-deployment

View the rollout history:

kubectl rollout history deployment/nginx-deployment


Four: ReplicaSet (RS) 

The main function of ReplicaSet is to keep a given number of pods running normally. It continuously monitors their status and restarts or rebuilds any pod that fails. It also supports scaling the pod count and upgrading the image version.

4.1 Resource list file of ReplicaSet

apiVersion: apps/v1 # version
kind: ReplicaSet # type
metadata: # metadata
  name: # rs name
  namespace: # namespace it belongs to
  labels: # labels
    controller: rs
spec: # detailed description
  replicas: 3 # number of replicas
  selector: # selector; specifies which pods this controller manages
    matchLabels:      # Labels matching rules
      app: nginx-pod
    matchExpressions: # Expressions matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template; when replicas fall short, pod replicas are created from it
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Under spec, the configuration options that need new understanding are:

replicas: the number of replicas, i.e. how many pods the current rs creates; defaults to 1
selector: establishes the association between the pod controller and its pods via the Label Selector mechanism — labels are defined on the pod template and the selector on the controller, indicating which pods the current controller manages
template: the template from which pod replicas are created when the replica count falls short; essentially a pod definition
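How matchLabels and matchExpressions pick out pods can be modeled in a few lines. This is a simplified sketch of the label-selector mechanism (the real semantics live in the Kubernetes API server, and only the In operator is modeled here):

```python
# Simplified model of a controller selector: a pod is managed only if
# ALL matchLabels entries and ALL matchExpressions are satisfied.

def selects(selector, pod_labels):
    for key, value in selector.get("matchLabels", {}).items():
        if pod_labels.get(key) != value:
            return False
    for expr in selector.get("matchExpressions", []):
        # Only the "In" operator is modeled in this sketch.
        if expr["operator"] == "In" and pod_labels.get(expr["key"]) not in expr["values"]:
            return False
    return True

selector = {
    "matchLabels": {"app": "nginx-pod"},
    "matchExpressions": [{"key": "app", "operator": "In", "values": ["nginx-pod"]}],
}
print(selects(selector, {"app": "nginx-pod"}))  # True  -> managed by this rs
print(selects(selector, {"app": "redis"}))      # False -> ignored
```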

StatefulSet features:

  • Gives each Pod an independent lifecycle while maintaining startup order and uniqueness
  • Stable, unique network identifiers and persistent storage (for example, an etcd configuration file becomes unusable if the node address changes)
  • Ordered, graceful deployment, scaling, deletion, and termination (for example, in a mysql master-slave setup, start the master first, then the slaves)
  • Ordered rolling updates

Application scenario: databases

4.2 The difference between stateful and stateless

Stateless:

  1. Deployment treats all its pods as identical
  2. No ordering requirements
  3. No need to consider which node a pod runs on
  4. Can scale out and in at will

Stateful:

  1. Instances differ from one another; each has its own identity and metadata, e.g. etcd, zookeeper
  2. Asymmetric relationships between instances, and applications that rely on external storage

4.3 The difference between regular service and headless service

service: a set of access policies for a group of Pods; provides cluster-internal communication via a cluster IP, plus load balancing and service discovery.
Headless service: has no cluster IP and binds directly to the IPs of specific Pods (since pod IPs change dynamically, it is usually accessed via DNS)

  1. ClusterIP
  2. NodePort: uses the IP of the node the pod runs on plus a port from the node port range
  3. Headless: exposes the pod IPs directly
  4. LoadBalancer: external load balancing (e.g. F5)
  5. HostPort: uses the host's IP and port directly

Note: k8s mainly exposes services in three ways: Ingress, LoadBalancer (an external load balancer such as Nginx, HAProxy, Kong, or Traefik, or a cloud SLB/ALB outside the k8s cluster), and Service.

4.4 Examples

4.4.1 Create DNS resources first

# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never

Create the dns resource
# kubectl create -f pod.yaml
 
# kubectl get pods

4.4.2 Then use the statefulset controller type to create nginx pod resources and create headless service resources

# vim sts.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet  
metadata:
  name: nginx-statefulset  
  namespace: default
spec:
  serviceName: nginx  
  replicas: 3  
  selector:
    matchLabels:  
       app: nginx
  template:  
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:latest  
        ports:
        - containerPort: 80  

# kubectl create -f sts.yaml 
Pods created statefully are automatically labeled to distinguish them
# kubectl get pods
 
# kubectl get pods,svc
 
Verify DNS resolution
# kubectl exec -it dns-test -- sh
Inside the container, each pod's unique domain name resolves to its own IP (e.g. nslookup nginx-statefulset-0.nginx)


4.4.3 Summary

The difference between StatefulSet and Deployment:

The pod created by StatefulSet has identity!

Three elements of identity:

  • Domain name: nginx-statefulset-0.nginx
  • Hostname: nginx-statefulset-0
  • Storage: PVC
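The stable names follow a fixed pattern: the hostname is the StatefulSet name plus an ordinal, and the in-cluster domain appends the headless service name (and, for the full FQDN, the namespace and cluster domain). A quick sketch of the naming convention, assuming the default cluster domain cluster.local:

```python
# Sketch of the stable identity a StatefulSet gives each replica:
# hostname = <statefulset>-<ordinal>, domain = <hostname>.<service>,
# fqdn appends <namespace>.svc.<cluster-domain> (cluster.local assumed).

def statefulset_identities(name, service, replicas, namespace="default"):
    out = []
    for i in range(replicas):
        hostname = f"{name}-{i}"
        out.append({
            "hostname": hostname,
            "domain": f"{hostname}.{service}",
            "fqdn": f"{hostname}.{service}.{namespace}.svc.cluster.local",
        })
    return out

for ident in statefulset_identities("nginx-statefulset", "nginx", 3):
    print(ident["fqdn"])
```

This is why the dns-test pod above can resolve nginx-statefulset-0.nginx: the headless service publishes one DNS record per pod instead of a single cluster IP.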


Five: DaemonSet

Features:

Runs one Pod on every Node; a newly joined Node also automatically runs a Pod.

Application scenarios: Agent, monitoring
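The one-pod-per-node behavior can be sketched as a small reconciliation step (an illustrative model, not actual Kubernetes code; node and pod names are invented):

```python
# Illustrative sketch of DaemonSet reconciliation: ensure exactly one
# pod per node. Nodes without a pod get one created; pods on nodes
# that no longer exist are deleted.

def daemonset_reconcile(nodes, pods_by_node):
    create_on = [n for n in nodes if n not in pods_by_node]
    delete_from = [n for n in pods_by_node if n not in nodes]
    return create_on, delete_from

nodes = ["node1", "node2", "node3"]            # node3 just joined the cluster
pods = {"node1": "ds-abc", "node2": "ds-def"}  # existing daemonset pods
print(daemonset_reconcile(nodes, pods))        # create on node3, delete nothing
```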

DaemonSet | Kubernetes

Official case (monitoring)

Create an nginx pod resource with the DaemonSet controller type. No replicas count is specified: one pod is created per node, and if a new node joins, a pod is created on the new node as well.

# vim ds.yaml 
apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
 

# kubectl apply -f ds.yaml 
 
DaemonSet creates a Pod on every node
# kubectl get pods
 
# kubectl get pods -o wide
If another node is added, a pod is created on the new node too


Six: Job  

Jobs come in two forms: ordinary tasks (Job) and scheduled tasks (CronJob).

One-time execution.

Application scenarios: offline data processing, video decoding and other services

Jobs | Kubernetes

Official case: big-data scenarios

Example: 

Use the Job controller type to create a resource that runs a command computing π to 2000 digits; the creation process is effectively the computation itself. The default retry count (backoffLimit) is 6; here it is changed to 4. Because restartPolicy is Never, a failed pod is not restarted in place, so the retry count must be set on the Job.
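The interaction between restartPolicy: Never and backoffLimit can be sketched roughly as follows: the Job controller keeps creating replacement pods until one succeeds or the failure count reaches backoffLimit (a hedged simplification — the real controller also applies an exponential back-off delay between retries):

```python
# Rough sketch of Job retry semantics with restartPolicy: Never.
# Each entry in attempt_results is whether that pod run succeeded.
# (Real Kubernetes also delays retries with exponential back-off.)

def run_job(attempt_results, backoff_limit=4):
    failures = 0
    for succeeded in attempt_results:
        if succeeded:
            return "Complete"
        failures += 1
        if failures >= backoff_limit:
            return "Failed"   # backoffLimit reached: stop creating pods
    return "Running"          # no success yet, retries remain

print(run_job([False, False, True]))          # succeeds on the 3rd pod
print(run_job([False, False, False, False]))  # hits backoffLimit -> Failed
```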

# vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Pull the perl image on the node nodes ahead of time, because the image is large
On node1 and node2:
# docker pull perl:5.34.0
 
The creation process is equivalent to the computation
# kubectl apply -f job.yaml 
job.batch/pi created
 
Check the status
# kubectl get pods
# kubectl describe pod pi-tkdlc 
 
View the log to see the result; it is printed to the console
# kubectl logs pi-tkdlc
3.141592653589793.............................................(2000 digits in total)


Seven: CronJob 

Periodic tasks, like Linux's crontab.

Application scenarios: notifications, backups

Running Automated Tasks with a CronJob | Kubernetes

Example: 

Output a message every minute: the date plus a hello line
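The five fields of schedule are minute, hour, day-of-month, month, and day-of-week, so "*/1 * * * *" means "every minute". A tiny matcher for a single cron field illustrates the step syntax (simplified — real cron also supports ranges like 1-5 and comma lists):

```python
# Simplified matcher for one cron field supporting "*", "*/n", and a
# plain number (real cron also allows ranges and comma-separated lists).

def field_matches(field, value):
    if field == "*":
        return True
    if field.startswith("*/"):          # step syntax: every n units
        return value % int(field[2:]) == 0
    return int(field) == value

# "*/1 * * * *": the minute field matches every minute 0..59
print(all(field_matches("*/1", m) for m in range(60)))   # True
print(field_matches("*/15", 30))                         # True (0, 15, 30, 45)
print(field_matches("5", 6))                             # False
```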

# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
 
busybox is a minimal Linux userland image, commonly used for small test containers

# kubectl create -f cronjob.yaml
# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        <none>          25s
# kubectl get pods
View the log; the output goes to the console
# kubectl logs hello-1659577380-lfblb
Mon Feb 17 05:29:09 UTC 2020
Hello from the Kubernetes cluster
 
After waiting a minute, it executes again
# kubectl get pods
# kubectl logs hello-1659577380-lfblb
 
Finally, delete the resource, or the accumulated runs may bring the server down by the next day
# kubectl delete -f cronjob.yaml


Eight: Horizontal Pod Autoscaler (HPA) 

Kubernetes aims to adjust the number of pods automatically by monitoring pod usage; the Horizontal Pod Autoscaler (HPA) controller was created for this.

HPA obtains the utilization of each Pod, compares it against the target defined in the HPA, computes the specific replica count needed, and adjusts the number of Pods accordingly. Like Deployment, HPA is itself a Kubernetes resource object: it tracks and analyzes the load of all target Pods managed by the referenced controller and decides whether that controller's replica count needs to change. That is the principle behind HPA.
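The core scaling rule HPA applies is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped to [minReplicas, maxReplicas]. A sketch of that rule (the real controller also applies a tolerance band and stabilization windows):

```python
import math

# The HPA scaling rule: scale replicas in proportion to how far the
# observed metric is from the target, clamped to the min/max bounds.

def desired_replicas(current, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    want = math.ceil(current * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, want))

# e.g. observed CPU 22% against a 3% target on 1 replica:
print(desired_replicas(1, 22, 3))   # ceil(22/3) = 8
print(desired_replicas(8, 0, 3))    # load gone -> back to minReplicas
```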

Next, let's do an experiment

8.1 Install metrics-server

# Install git
[root@master ~]# yum install git -y
# Fetch metrics-server; note the version being used
[root@master~]# git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server
# Edit the deployment; the changes are the image and the startup arguments
[root@master~]# cd /root/metrics-server/deploy/1.8+/
[root@master 1.8+]# vim metrics-server-deployment.yaml
Add the following options to the deployment:
hostNetwork: true
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP


# Install metrics-server
[root@k8s-master01 1.8+]# kubectl apply -f ./
 
# Check that the pod is running
[root@k8s-master01 1.8+]# kubectl get pod -n kube-system
metrics-server-6b976979db-2xwbj   1/1     Running   0          90s
 
# Use kubectl top node to view resource usage
[root@k8s-master01 1.8+]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   289m         14%    1582Mi          54%
k8s-node01     81m          4%     1195Mi          40%
k8s-node02     72m          3%     1211Mi          41%
[root@k8s-master01 1.8+]# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
coredns-6955765f44-7ptsb          3m           9Mi
coredns-6955765f44-vcwr5          3m           8Mi
etcd-master                       14m          145Mi
...
# At this point, metrics-server is installed

8.2 Prepare deployment and service 

Create the pc-hpa-pod.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        resources: # resource quota
          limits:  # resource limits (upper bound)
            cpu: "1" # CPU limit, in cores
          requests: # resource requests (lower bound)
            cpu: "100m"  # CPU request, in cores
# Create the resources
[root@k8s-master01 1.8+]# kubectl create -f pc-hpa-pod.yaml
# View them
[root@k8s-master01 1.8+]# kubectl get deployment,pod,svc -n dev
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           47s
 
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7df9756ccc-bh8dr   1/1     Running   0          47s
 
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx   NodePort   10.101.18.29   <none>        80:31830/TCP   35s

8.3 Deploy HPA 

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1  # minimum number of pods
  maxReplicas: 10 # maximum number of pods
  targetCPUUtilizationPercentage: 3 # target CPU utilization
  scaleTargetRef:   # the nginx deployment to control
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
# Create the hpa
[root@k8s-master01 1.8+]# kubectl create -f pc-hpa.yaml
horizontalpodautoscaler.autoscaling/pc-hpa created
 
# View the hpa
[root@k8s-master01 1.8+]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          62s

8.4 Testing

Use a load-testing tool to stress the service address 192.168.223.30:31830, then watch the hpa and the pods change from the console.

HPA changes:

[root@k8s-master01 ~]# kubectl get hpa -n dev -w
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          4m11s
pc-hpa   Deployment/nginx   0%/3%     1         10        1          5m19s
pc-hpa   Deployment/nginx   22%/3%    1         10        1          6m50s
pc-hpa   Deployment/nginx   22%/3%    1         10        4          7m5s
pc-hpa   Deployment/nginx   22%/3%    1         10        8          7m21s
pc-hpa   Deployment/nginx   6%/3%     1         10        8          7m51s
pc-hpa   Deployment/nginx   0%/3%     1         10        8          9m6s
pc-hpa   Deployment/nginx   0%/3%     1         10        8          13m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          14m

Deployment changes:

[root@k8s-master01 ~]# kubectl get deployment -n dev -w
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           11m
nginx   1/4     1            1           13m
nginx   1/4     1            1           13m
nginx   1/4     1            1           13m
nginx   1/4     4            1           13m
nginx   1/8     4            1           14m
nginx   1/8     4            1           14m
nginx   1/8     4            1           14m
nginx   1/8     8            1           14m
nginx   2/8     8            2           14m
nginx   3/8     8            3           14m
nginx   4/8     8            4           14m
nginx   5/8     8            5           14m
nginx   6/8     8            6           14m
nginx   7/8     8            7           14m
nginx   8/8     8            8           15m
nginx   8/1     8            8           20m
nginx   8/1     8            8           20m
nginx   1/1     1            1           20m

Pod changes:

[root@k8s-master01 ~]# kubectl get pods -n dev -w
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7df9756ccc-bh8dr   1/1     Running   0          11m
nginx-7df9756ccc-cpgrv   0/1     Pending   0          0s
nginx-7df9756ccc-8zhwk   0/1     Pending   0          0s
nginx-7df9756ccc-rr9bn   0/1     Pending   0          0s
nginx-7df9756ccc-cpgrv   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-8zhwk   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-rr9bn   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-m9gsj   0/1     Pending             0          0s
nginx-7df9756ccc-g56qb   0/1     Pending             0          0s
nginx-7df9756ccc-sl9c6   0/1     Pending             0          0s
nginx-7df9756ccc-fgst7   0/1     Pending             0          0s
nginx-7df9756ccc-g56qb   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-m9gsj   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-sl9c6   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-fgst7   0/1     ContainerCreating   0          0s
nginx-7df9756ccc-8zhwk   1/1     Running             0          19s
nginx-7df9756ccc-rr9bn   1/1     Running             0          30s
nginx-7df9756ccc-m9gsj   1/1     Running             0          21s
nginx-7df9756ccc-cpgrv   1/1     Running             0          47s
nginx-7df9756ccc-sl9c6   1/1     Running             0          33s
nginx-7df9756ccc-g56qb   1/1     Running             0          48s
nginx-7df9756ccc-fgst7   1/1     Running             0          66s
nginx-7df9756ccc-fgst7   1/1     Terminating         0          6m50s
nginx-7df9756ccc-8zhwk   1/1     Terminating         0          7m5s
nginx-7df9756ccc-cpgrv   1/1     Terminating         0          7m5s
nginx-7df9756ccc-g56qb   1/1     Terminating         0          6m50s
nginx-7df9756ccc-rr9bn   1/1     Terminating         0          7m5s
nginx-7df9756ccc-m9gsj   1/1     Terminating         0          6m50s
nginx-7df9756ccc-sl9c6   1/1     Terminating         0          6m50s

 

Origin blog.csdn.net/ver_mouth__/article/details/126147174