k8s Analysis: Pod, Deployment, Service

1. Concept introduction

1. Pod

Kubernetes uses Pods to manage containers. Each Pod contains one or more closely related containers that share the PID, IPC, Network, and UTS namespaces, and the Pod is the basic unit of Kubernetes scheduling. Because the containers in a Pod share the network and file system, they can be combined to deliver a service through simple, efficient inter-process communication and file sharing.

A sample Pod YAML file:

apiVersion: v1 #API version
kind: Pod #Resource type
metadata: #Metadata
  name: string #Pod name
  namespace: string #Namespace the Pod belongs to
  labels: #Custom labels (key: value pairs)
    key: string
  annotations: #Custom annotations (key: value pairs)
    key: string
spec: #Detailed definition of the containers in the Pod
  containers: #List of containers in the Pod
  - name: string #Container name
    image: string #Image name of the container
    imagePullPolicy: [Always | Never | IfNotPresent] #Always: always pull the image; IfNotPresent: use the local image if present, otherwise pull; Never: only use the local image
    command: [string] #Container startup command list; if not specified, the command baked into the image is used
    args: [string] #Startup command argument list
    workingDir: string #Working directory of the container
    volumeMounts: #Storage volumes mounted inside the container
    - name: string #Name of a shared storage volume defined by the Pod; must match a name in the volumes[] section
      mountPath: string #Absolute mount path inside the container (should be fewer than 512 characters)
      readOnly: boolean #Whether the mount is read-only
    ports: #List of ports to expose
    - name: string #Port name
      containerPort: int #Port the container listens on
      hostPort: int #Port on the host; defaults to the same as containerPort
      protocol: string #Port protocol, TCP or UDP; defaults to TCP
    env: #Environment variables to set before the container runs
    - name: string #Environment variable name
      value: string #Environment variable value
    resources: #Resource limits and requests
      limits: #Resource limit settings
        cpu: string #CPU limit in cores; maps to the docker run --cpu-shares parameter
        memory: string #Memory limit, in units such as Mi/Gi; maps to the docker run --memory parameter
      requests: #Resource request settings
        cpu: string #CPU initially available when the container starts
        memory: string #Memory initially available when the container starts
    livenessProbe: #Health check for a container in the Pod; the container is restarted automatically after repeated failed probes. The probe types are exec, httpGet, and tcpSocket; set only one of them per container.
      exec: #Probe type exec
        command: [string] #Command or script to execute
      httpGet: #Probe type httpGet; path and port must be specified
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket: #Probe type tcpSocket
        port: number
      initialDelaySeconds: 0 #Seconds to wait after the container starts before the first probe
      timeoutSeconds: 0 #Timeout in seconds waiting for a probe response; defaults to 1 second
      periodSeconds: 0 #Probe interval in seconds; defaults to once every 10 seconds
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure] #Pod restart policy. Always: the kubelet restarts the Pod however it terminates; OnFailure: restart only when the Pod exits with a non-zero code; Never: never restart the Pod
  nodeSelector: object #Schedule the Pod onto nodes carrying these labels, specified as key: value pairs
  imagePullSecrets: #Secrets used when pulling images
  - name: string
  hostNetwork: false #Whether to use host networking; defaults to false; true means use the host's network
  volumes: #List of shared storage volumes defined on the Pod
  - name: string #Shared storage volume name (there are many volume types)
    emptyDir: {} #emptyDir volume: a temporary directory with the same lifecycle as the Pod
    hostPath: #hostPath volume: mounts a directory of the host the Pod runs on
      path: string #Directory on the host to mount
    secret: #secret volume: mounts a Secret object defined in the cluster into the container
      secretName: string
      items:
      - key: string
        path: string
    configMap: #configMap volume: mounts a predefined ConfigMap object into the container
      name: string
      items:
      - key: string
        path: string
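For illustration, here is the skeleton above filled in as a concrete minimal Pod; the names and image are hypothetical, not from the original article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-demo            # hypothetical Pod name
  namespace: default
  labels:
    app: web-demo
spec:
  containers:
  - name: web               # hypothetical container name
    image: nginx:1.21
    imagePullPolicy: IfNotPresent   # use the local image first, otherwise pull
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 1Gi
    livenessProbe:          # restart the container if the probe keeps failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
  restartPolicy: Always
```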

A few points here are worth explaining:

  1. The apiVersion values supported by a k8s cluster can be listed with the command kubectl api-versions. Three common maturity levels are:
  • alpha: development version; may contain bugs, and support for a feature may be dropped at any time
  • beta: beta version; the software has been well tested and enabling the feature is considered safe; details may change, but the feature will not be removed in subsequent versions
  • stable: stable version, which will remain in subsequent software releases
  2. Harbor defaults to the HTTPS protocol. To pull images from Harbor over HTTP, modify the /etc/docker/daemon.json file on every k8s node:
{
    "insecure-registries":["http://your-harbor-url"]
}

Then restart docker:

systemctl daemon-reload
systemctl restart docker

2. Deployment

  • Define a Deployment to create Pods and a ReplicaSet
  • Roll out upgrades and roll back an application
  • Scale up and scale down
  • Pause and resume a deployment

A sample Deployment YAML file (much has been omitted for space):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: string #Deployment name
spec:
  replicas: 3 #Target number of replicas
  strategy:
    rollingUpdate:
      maxSurge: 1 #Number of extra pods allowed during a rolling upgrade
      maxUnavailable: 1 #Maximum number of unavailable pods allowed during a rolling upgrade
  template: #Pod template, which can contain multiple containers
    metadata:
      labels:
        app: string #Template label
    spec:
      containers:
      - name: string
        image: string
        ports:
        - name: http
          containerPort: 8080 #Port exposed to the Service

How to upgrade and roll back applications in k8s

To perform a rolling upgrade, first update the image version in the YAML file, then set the values of maxSurge and maxUnavailable as required, and re-apply the file.
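As a sketch (the names and image tags below are hypothetical), a rolling upgrade amounts to changing the image in the Deployment manifest and re-applying it:

```yaml
# deployment.yaml (fragment) -- bump the image tag, then run: kubectl apply -f deployment.yaml
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1         # at most 1 extra pod during the upgrade
      maxUnavailable: 1   # at most 1 pod may be unavailable at a time
  template:
    spec:
      containers:
      - name: web
        image: my-app:v2  # was my-app:v1; changing this triggers the rolling update
# To roll back afterwards: kubectl rollout undo deployment/<name>
```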

How does k8s scale up and down

Modify the value of replicas and re-apply.
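For example (names hypothetical), scaling is just a change to replicas; kubectl scale achieves the same thing imperatively:

```yaml
# deployment.yaml (fragment)
spec:
  replicas: 5   # was 3; re-apply to scale up to 5 pods
# Equivalent imperative form: kubectl scale deployment/<name> --replicas=5
```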

3. Service

apiVersion: v1
kind: Service
metadata: #Metadata
  name: string #Service name
  namespace: string #Namespace
  labels: #Custom labels (key: value pairs)
    key: string
  annotations: #Custom annotations (key: value pairs)
    key: string
spec: #Detailed description
  selector: {} #Label selector; Pods carrying these labels are managed by the Service
  type: string #Service type, which determines how the Service is accessed; defaults to ClusterIP
  clusterIP: string #Virtual service address
  sessionAffinity: string #Whether to support sessions
  ports: #List of ports the Service exposes
  - name: string #Port name
    protocol: string #Port protocol, TCP or UDP; defaults to TCP
    port: int #Port the Service listens on
    targetPort: int #Port on the backend Pods to forward to
    nodePort: int #When type = NodePort, the port mapped onto the physical machine
status: #When spec.type = LoadBalancer, the address of the external load balancer
  loadBalancer: #External load balancer
    ingress:
      ip: string #IP address of the external load balancer
      hostname: string #Hostname of the external load balancer
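A concrete minimal example (names and ports are hypothetical) exposing Pods labeled app: web-demo as a NodePort service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-demo-svc       # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: web-demo          # Pods carrying this label become backends
  ports:
  - name: http
    protocol: TCP
    port: 80               # port the Service listens on
    targetPort: 80         # port on the backend Pods
    nodePort: 30080        # port exposed on every node (default range 30000-32767)
```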


2. Relationship analysis


The Deployment controls the RS, and the RS controls the Pods; together, this whole set provides a stable and reliable Service.

The following is the analysis process.

First, we start with the smallest scheduling unit, the Pod.
There is currently a pod in my k8s cluster named mq-svc-5b96bf78d9-brpjw:

[root@VM_0_17_centos ~]# kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
mq-svc-5b96bf78d9-brpjw   1/1       Running   0          51m

Let's look at its details:

[root@VM_0_17_centos ~]# kubectl describe pod mq-svc-5b96bf78d9-brpjw
Name:           mq-svc-5b96bf78d9-brpjw
Namespace:      default
Node:           10.0.0.17/10.0.0.17
Start Time:     Fri, 17 Aug 2018 17:24:44 +0800
Labels:         pod-template-hash=1652693485
                qcloud-app=mq-svc
Annotations:    <none>
Status:         Running
IP:             172.16.0.39
Controlled By:  ReplicaSet/mq-svc-5b96bf78d9
Containers:
  queue-mq:
    Container ID:   docker://700cdc55c111a413faaa8cabb8680009d2663701ccbe84b8a50ea6e6fe1d538c
    Image:          rabbitmq:management
    Image ID:       docker-pullable://rabbitmq@sha256:0b36ea1a8df9e53228aaeee277680de2cc97c7d675bc2d5dbe1cc9e3836a9d9f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 17 Aug 2018 17:24:49 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vzhz4 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-vzhz4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vzhz4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type    Reason                 Age   From                Message
  ----    ------                 ----  ----                -------
  Normal  Scheduled              51m   default-scheduler   Successfully assigned mq-svc-5b96bf78d9-brpjw to 10.0.0.17
  Normal  SuccessfulMountVolume  51m   kubelet, 10.0.0.17  MountVolume.SetUp succeeded for volume "default-token-vzhz4"
  Normal  Pulling                51m   kubelet, 10.0.0.17  pulling image "rabbitmq:management"
  Normal  Pulled                 51m   kubelet, 10.0.0.17  Successfully pulled image "rabbitmq:management"
  Normal  Created                51m   kubelet, 10.0.0.17  Created container
  Normal  Started                51m   kubelet, 10.0.0.17  Started container

The key information is here: the pod is controlled by a ReplicaSet named mq-svc-5b96bf78d9, so we can regard the RS as a higher-level component dedicated to managing pods; one RS manages a batch of pods.

Controlled By:  ReplicaSet/mq-svc-5b96bf78d9

The events that occur at the pod level are all operations on containers, such as pulling images and starting containers:

Events:
  Type    Reason                 Age   From                Message
  ----    ------                 ----  ----                -------
  Normal  Scheduled              51m   default-scheduler   Successfully assigned mq-svc-5b96bf78d9-brpjw to 10.0.0.17
  Normal  SuccessfulMountVolume  51m   kubelet, 10.0.0.17  MountVolume.SetUp succeeded for volume "default-token-vzhz4"
  Normal  Pulling                51m   kubelet, 10.0.0.17  pulling image "rabbitmq:management"
  Normal  Pulled                 51m   kubelet, 10.0.0.17  Successfully pulled image "rabbitmq:management"
  Normal  Created                51m   kubelet, 10.0.0.17  Created container
  Normal  Started                51m   kubelet, 10.0.0.17  Started container

Next, let's take a look at the details of this RS:

[root@VM_0_17_centos ~]# kubectl describe rs mq-svc-5b96bf78d9
Name:           mq-svc-5b96bf78d9
Namespace:      default
Selector:       pod-template-hash=1652693485,qcloud-app=mq-svc
Labels:         pod-template-hash=1652693485
                qcloud-app=mq-svc
Annotations:    deployment.changecourse=Updating
                deployment.kubernetes.io/desired-replicas=1
                deployment.kubernetes.io/max-replicas=2
                deployment.kubernetes.io/revision=2
                description=Service based on rabbitmq.
Controlled By:  Deployment/mq-svc
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  pod-template-hash=1652693485
           qcloud-app=mq-svc
  Containers:
   queue-mq:
    Image:      rabbitmq:management
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  50m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-r8n8t
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-l4zj2
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-m8tmv
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-m8tmv
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-r9wkj
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-8wzpq
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-d8gwc
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-d8gwc
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-8wzpq
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-l4zj2
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-r9wkj
  Normal  SuccessfulDelete  45m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-r8n8t

Key information:

Controlled By:  Deployment/mq-svc

This RS is controlled by the Deployment named mq-svc. From this, the Deployment is a component one level above the RS, used to manage ReplicaSets.
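The "Controlled By" line that kubectl describe prints is derived from the object's ownerReferences metadata. As a sketch, the pod's metadata looks roughly like this (the uid field is omitted here):

```yaml
# Fragment of: kubectl get pod mq-svc-5b96bf78d9-brpjw -o yaml
metadata:
  name: mq-svc-5b96bf78d9-brpjw
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet        # the owning RS; the RS in turn carries an
    name: mq-svc-5b96bf78d9 # ownerReference pointing at Deployment/mq-svc
    controller: true
```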

The events that occur at the RS level are all operations on pods: pods are created and pods are deleted.

Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  50m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-r8n8t
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-l4zj2
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-m8tmv
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-m8tmv
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-r9wkj
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-8wzpq
  Normal  SuccessfulCreate  49m   replicaset-controller  Created pod: mq-svc-5b96bf78d9-d8gwc
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-d8gwc
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-8wzpq
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-l4zj2
  Normal  SuccessfulDelete  49m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-r9wkj
  Normal  SuccessfulDelete  45m   replicaset-controller  Deleted pod: mq-svc-5b96bf78d9-r8n8t

Next, let's take a look at the Deployment:

[root@VM_0_17_centos ~]# kubectl describe deploy mq-svc
Name:                   mq-svc
Namespace:              default
CreationTimestamp:      Fri, 17 Aug 2018 17:21:13 +0800
Labels:                 qcloud-app=mq-svc
Annotations:            deployment.changecourse=Updating
                        deployment.kubernetes.io/revision=2
                        description=Service based on rabbitmq.
Selector:               qcloud-app=mq-svc
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        10
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:  qcloud-app=mq-svc
  Containers:
   queue-mq:
    Image:      rabbitmq:management
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mq-svc-5b96bf78d9 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  58m   deployment-controller  Scaled up replica set mq-svc-5b96bf78d9 to 2
  Normal  ScalingReplicaSet  57m   deployment-controller  Scaled up replica set mq-svc-5b96bf78d9 to 3
  Normal  ScalingReplicaSet  57m   deployment-controller  Scaled up replica set mq-svc-5b96bf78d9 to 4
  Normal  ScalingReplicaSet  57m   deployment-controller  Scaled down replica set mq-svc-5b96bf78d9 to 3
  Normal  ScalingReplicaSet  57m   deployment-controller  Scaled up replica set mq-svc-5b96bf78d9 to 6
  Normal  ScalingReplicaSet  57m   deployment-controller  Scaled down replica set mq-svc-5b96bf78d9 to 4
  Normal  ScalingReplicaSet  56m   deployment-controller  Scaled down replica set mq-svc-5b96bf78d9 to 2
  Normal  ScalingReplicaSet  53m   deployment-controller  Scaled down replica set mq-svc-5b96bf78d9 to 1

It can be seen that at the Deployment level, it is no longer controlled by another component; its state transitions are driven by API calls. The events that occur at the Deployment level are generally creating a service, rolling out a service upgrade, or scaling the Pod set of an RS.

Finally, it is also clear that a Service is a stable endpoint provided to the outside world on top of this whole stack.

Origin: blog.csdn.net/THMAIL/article/details/107312208