Detailed explanation of kubernetes - from entry to burial (updating~)

Introduction to k8s

Classification of orchestration tools

system level

ansible, saltstack

docker container

  • docker compose + docker swarm + docker machine

        docker compose: single-machine container orchestration
        docker swarm: joins multiple Docker hosts into one logical cluster
        docker machine: provisions and initializes new Docker hosts

  • mesos + marathon

        Mesos: a data-center (IDC) resource allocation framework developed under Apache; Marathon runs on top of it to provide orchestration

  • kubernetes

Development models and architecture

development mode

Waterfall development → iterative development → agile development → DevOps

Application architecture

Monolithic architecture → layered architecture → microservices    

DevOps

process:

        Requirements → Development → Testing → Delivery → Deployment

        This application development model integrates development and operations, breaking down the barriers between the two.
CI (Continuous Integration):

        After development, the code is merged into the code repository and then automatically built, deployed and tested. Any problems are sent back to the developers; if there are none, the build is automatically handed over to delivery.
CD (continuous delivery):

        After testing, the final artifact is automatically packaged and stored somewhere that operations or customers can obtain it.
CD (continuous deployment):

        After delivery, the package is automatically pulled and deployed, and bugs that occur at runtime are automatically fed back to development.

        Common deployment scenarios

Advantages of containerized deployment:

        Delivery and deployment used to be extremely difficult because of the environmental differences between systems and versions; containers make them easy to implement and truly achieve "write once, deploy many times".
Microservices:

        Each application is broken down into tiny services, each providing a single feature; a monolithic application may be split into hundreds of microservices that cooperate with each other.

Disadvantages:

        Distribution, deployment and the call relationships between microservices become extremely complex, and among hundreds of microservices some will inevitably develop serious problems; sorting them out and fixing them manually is unrealistic. Containers and orchestration tools can solve these problems.

solution:

        It is the emergence of containers and orchestration tools that makes microservices and DevOps easy to implement.

Features of kubernetes

Automatic bin packing
Self-healing
Horizontal scaling
Service discovery and load balancing
Automatic release and rollback
Key and configuration management
Batch processing execution

K8S composition and architecture

K8S composition architecture

The entire kubernetes cluster consists of masters, nodes, core components and add-ons.

Kubernetes is a cluster system with a master/node (central) architecture. For high availability there is usually one master or a group of masters (typically three), while every node contributes compute, storage and other resources and runs the containers.

The four core components of the master (the first three run as daemon processes):

  • API Server:

        Responsible for receiving and parsing external requests and for storing the state of every object in the cluster. Serving clients over https requires a CA and certificates.

  • scheduler:

        Responsible for watching the resources on each node, finding the nodes that satisfy the resource requirements of the container requested by the user, and then selecting the optimal node according to the scheduling algorithms.

  • Controller Manager:

        Responsible for monitoring and ensuring the health of every controller; redundancy is provided by running the controller manager on multiple masters.

  • etcd shared storage:

        The API Server needs to store the state of every object in the cluster, which requires a dedicated store: etcd, a key-value database. Because of its importance, etcd generally runs as a three-node cluster for high availability. It communicates over https by default and uses two different interfaces for internal and external traffic: peer-to-peer communication between etcd members uses one set of certificates, while serving clients (the API Server) uses another.

Node: in theory any machine with enough computing power to host containers can act as a node. It has three core components:

  • kubelet (cluster agent):

        Communicates with the master, receives the tasks scheduled by the master and has the container engine (docker being the most popular) execute them

  • docker:

        Container engine, runs the containers in the pod

  • kube-proxy:

        Whenever a pod is added or deleted and a service's rules must change, kube-proxy is needed: the change is broadcast to all associated components, and when the service is notified, kube-proxy modifies the iptables rules on every node in the cluster.

Other components:                 

pod controller:

        The workload controllers are responsible for monitoring whether each managed container is healthy and whether the number of replicas matches the desired count, ensuring that pod resources stay in the expected state. If an exception occurs, a request is sent to the API Server and Kubernetes re-creates and restarts the container; rolling updates and rollbacks are also possible.

There are many types of controllers:
  • ReplicaSet:

        Creates the specified number of pod replicas on behalf of the user, ensures the replica count matches the desired state, and supports rolling automatic scaling.

        Helps users manage stateless pod resources and keeps the actual count at the user-defined target. However, ReplicaSet is not meant to be used directly; Deployment is used instead.
        ReplicaSet consists of three main components:

                the number of pod replicas the user expects; the label selector that determines which pods it manages; and the pod resource template, from which new pods are created when the current number is insufficient

  • Deployment:

        Works on top of ReplicaSet and is used to manage stateless applications; currently the most commonly used controller. It supports rolling updates and rollbacks, and provides declarative configuration: the desired state defined on the API Server can be re-declared and changed at any time, as long as the resources support dynamic runtime modification.
        Update and rollback functions:

                A Deployment generally controls two or more ReplicaSets but usually only one is active. During an update it stops the pods on the active ReplicaSet one by one and creates them on another ReplicaSet until the update completes; a rollback is the opposite operation. You can control the update method and pace: for example, define how many pods are updated at a time, or temporarily add a few new pods before deleting the old ones so that the number of pods serving traffic stays consistent.
         HPA: a secondary controller, responsible for monitoring resource usage and scaling automatically

  • DaemonSet:

        Ensures that exactly one copy of a specific pod runs on every node in the cluster (or on every node matching a label selector). Typically used for system-level background tasks, for example the log collector of an ELK stack: whenever a new node joins, a copy of the pod is added automatically.
        The service characteristics deployed by Deployment and DaemonSet:
                The service is stateless
                The service must be a daemon process

  • StatefulSet:

        Manages stateful applications; each Pod replica is managed individually

  • Job:

        Runs a task to completion and then exits immediately, without restarting or rebuilding; if the task exits abnormally before completing, the pod is rebuilt. Suitable for one-off tasks

  • Cronjob:

        Periodic task control, no need to continue running in the background


pod:

        The smallest unit in kubernetes: a layer of virtual-machine-like encapsulation on top of containers. A pod can contain multiple containers (usually one), and those containers share a network namespace and storage volumes.
        Two categories:
                    Autonomous pods: not managed by a controller; if the node fails, the pod disappears and is not rebuilt
                    Controller-managed pods: pods whose lifecycle is managed by a pod controller

label:

        Resources can be grouped by labeling them. Any object can be labeled (pods are the most important kind of labeled object). Format: key=value
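        A quick illustration (the pod name myapp-pod and the labels are made up for this example):

            kubectl label pods myapp-pod app=myapp release=canary   # attach two labels to an existing pod
            kubectl get pods --show-labels                          # show every pod with its labels
            kubectl get pods -l app=myapp,release=canary            # select only the pods carrying both labels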

selector (label selector):

        A mechanism to filter the resource objects that meet given conditions based on their labels

service:

        A service is a layer-4 scheduler: in practice it is a set of DNAT (iptables) rules plus a virtual address on the hosts, responsible for reverse-proxying to the back-end pods. When a service is created it is reflected on every node in the cluster; its resolvable address is fixed, but it is not attached to any network card. Because back-end pods change frequently (constant scaling, creation and deletion), their addresses and host names are not fixed, so every time the back end changes, the service uses its label selector to pick up the differences and record the pods together with their addresses and names. A request therefore first reaches the service, which looks up the matching pod and forwards the traffic to it. After installing K8S, the service's domain name resolution must be configured in the cluster DNS; if the service address or name is changed manually, the DNS record is updated automatically. If a back-end service has multiple pods and the cluster supports ipvs, the rules are written into ipvs for load balancing.

        Service rules use ipvs when it is available; if ipvs is not supported, kube-proxy automatically falls back to iptables.
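        A hedged way to see the rules a service produces on a node (the service names in the output depend on your cluster; the second command applies to iptables mode, the third to ipvs mode):

            kubectl get svc                              # list services and their cluster IPs
            iptables -t nat -L KUBE-SERVICES -n | head   # iptables mode: per-service DNAT chains
            ipvsadm -Ln                                  # ipvs mode: one virtual server per service port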

monitor:

        Monitor related resource utilization, number of visits, and whether there are failures

Four core accessories:

  • dns

        To be discovered by name, services rely on the DNS service, which itself runs in pods.

  • Heapster/Metrics Server:

        Used to collect and aggregate cluster- and container-level resource usage and metric data for monitoring and automatic scaling.

  • Kubernetes Dashboard:

        Provides a web-based user interface for visual management and monitoring of Kubernetes clusters.

  • Ingress Controller:

        used to route external network traffic to services within the cluster. Common Ingress Controllers include Nginx Ingress Controller and Traefik.

Access process

        Create an NMT (nginx + mysql + tomcat) stack on K8S


        Client → LBaaS → node port → nginx_service → nginx container → tomcat_service → tomcat → mysql_service → mysql
        If you are using Alibaba Cloud, the underlying LBaaS of the cloud is called, because the nodes may not have an externally reachable network card; the load balancer schedules the requested traffic to the external interface on the nodes. A controller then creates two pods running nginx, with a service on top of them that receives the traffic from the node interface and forwards it to nginx; a controller then creates three pods running tomcat with a service on top of them, and in the same way a service is created on top of two mysql pods.

K8S network and communication                           

 K8S has a total of three networks:

            Node network
            Cluster network (service network)
            Pod network

Three communication scenarios:

  • Communication between containers in the same pod:

        Communication goes through the loopback interface (lo)

  • Communication between different pods:

         Pods on different nodes communicate through an overlay network. The packet carrying the pod IP and port is encapsulated in an outer IP packet for transmission between nodes; when the destination node receives the overlay packet it decapsulates it and uses the inner IP and port to find the target. Since pods are dynamic and can be added or deleted at any time, a service is used to locate the specific target pod.

  • Communication between pod and service:

        Because the service address is really just an iptables rule on the host, a newly created service is reflected in the iptables of every node in the cluster. When a pod needs to reach a service, its gateway points to the address of the docker0 bridge; the host also uses the docker0 bridge as one of its own interfaces, so the pod on the host can communicate with the service directly, its requests being matched and forwarded by the iptables rule table.

CNI plug-in system:

  • Container network interface:

        Responsible for plugging external network solutions into the cluster; the plug-in can run in pods as an add-on of the cluster, in which case it must share the host's network namespace.

  • Main functions:

            Responsible for providing IP addresses to pods and services
            Responsible for the network policy function, i.e. pod isolation: rules can be added as required so that different pods can communicate or be isolated, preventing a malicious service hosted in one pod from intercepting or attacking services in other pods

  • Common plug-ins:

                flannel (overlay network): only supports network configuration
                calico (layer-3 tunnel network): supports network configuration and network policy
                canal: combines the two above, using Flannel for network configuration and Calico for network policy
                ...

  • CA: K8S generally requires 5 sets of certificates (CAs):

            between etcd members
            between etcd and the API Server
            between the API Server and its clients (users)
            between the API Server and kubelet
            between the API Server and kube-proxy

k8s namespace:

        A cluster is divided into multiple namespaces, usually one per class of pods or per project; the namespace boundary is not a network boundary but a management boundary. For example, a project can hold multiple pods and be managed in batches, and when the project is finished all of its pods can be deleted in one batch.
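        For example (the namespace name dev is arbitrary), a project namespace can be created, used and removed in bulk:

            kubectl create namespace dev      # create a management boundary for a project
            kubectl get pods -n dev           # operate only on objects inside it
            kubectl delete namespace dev      # deleting it removes every object it contains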

deploy

Common deployment methods:

  • Traditional manual deployment:

        All components run as system daemons, which is very tedious and includes generating the 5 sets of CAs by hand

  • kubespray:

        kubespray is a deployment solution based on Ansible, executed from pre-written Ansible playbooks; it also runs all components as system daemons.

  • kubeadm (the official installation tool):

        Containerized deployment; essentially a complete set of scripted packaging, relatively fast and simple


                    
kubeadm deployment

  • Server settings:

            network settings:
                service network: 10.96.0.0/12
                pod network: 10.244.0.0/16 (flannel plug-in default)
            hosts:
                master+etcd: 172.20.0.70
                Node1: 172.20.0.66
                Node2: 172.20.0.67






  • Version settings

                    centos7.9
                    docker-ce-20.10.8, docker-ce-cli-20.10.8
                    kubelet-1.20.9, kubeadm-1.20.9, kubectl-1.20.9

  • installation steps:

                1. Install docker + kubelet + kubeadm on all hosts
                2. Initialize the master: kubeadm init (checks the preconditions, runs the three master components and etcd as pods, sets up the CAs, etc.)
                3. Initialize each node: kubeadm join (checks the preconditions, runs kube-proxy as a pod, handles dns, authentication, etc.)
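                A minimal sketch of steps 2 and 3, assuming the addresses and versions listed above (the token and hash are printed by kubeadm init and are only placeholders here):

                    # on the master (172.20.0.70)
                    kubeadm init --kubernetes-version=v1.20.9 \
                                 --pod-network-cidr=10.244.0.0/16 \
                                 --service-cidr=10.96.0.0/12 \
                                 --ignore-preflight-errors=Swap
                    # on each node, using the values printed by kubeadm init
                    kubeadm join 172.20.0.70:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>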
                

  • Installation reference documentation:

        Official Documents
        !!! The installation reference document is quite detailed; you can simply follow it to install.

  • Precautions:

        Docker and k8s require matching versions; if the version gap is large, incompatibilities arise. Common K8S / Docker version relationships:
                        K8S 1.22.x supports Docker 20.10.x
                        K8S 1.21.x supports Docker 20.10.x
                        K8S 1.20.x supports Docker 19.03.x
                        K8S 1.19.x supports Docker 19.03.x
                        K8S 1.18.x supports Docker 19.03.x


                     

  • Program related directories

                rpm -ql kubelet
                /etc/kubernetes/manifests ---- manifest directory
                /etc/sysconfig/kubelet ---- configuration file
                /etc/systemd/system/kubelet.service
                /usr/bin/kubelet ---- Main program
                    

  • kubeadm initialization preparation (master)

                kubeadm init
                    kubeadm init Flags:
                        --apiserver-advertise-address string #Set apiserver listening address (default all)
                        --apiserver-bind-port int32 #Set the apiserver listening port (default 6443)
                        --cert-dir string #Set the certificate path (default "/etc/kubernetes/pki")
                        --config string #Set the configuration file
                        --ignore-preflight-errors strings #Ignore preflight check errors (e.g. 'IsPrivilegedUser,Swap')
                        --pod-network-cidr string #Pod network (the flannel plug-in default is 10.244.0.0/16)

  • Copy the kubectl configuration file:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

               #admin.conf: a configuration file that kubeadm generates automatically during initialization; kubectl uses it to know how to connect to the K8S API Server and complete authentication.
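        After copying the config you can verify that kubectl reaches the API Server (the output varies with your cluster):

            kubectl get nodes                 # all nodes should eventually show STATUS Ready
            kubectl get cs                    # component status of scheduler, controller-manager, etcd
            kubectl get pods -n kube-system   # system pods such as dns and kube-proxy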

kubectl commands

  • kubectl:

kubectl is the client program of the API Server: it connects to the API Server application on the master node, which is the only management entrance of the entire K8S cluster, and kubectl is the client management tool for that entrance.

  • Object-oriented:

        kubectl is modeled on object-oriented development languages: every resource in k8s can be regarded as an object class, and classes have methods and attributes. Whenever a pod is created from given parameters, it is like assigning values to the class's fields, i.e. instantiating an object.

  • Related commands:

        kubectl  
            version #Get version information
            cluster-info #Get cluster information
            api-versions #View Currently supported api version
  

            explain <type>.<fieldName>[.<fieldName>] #Show the documentation of a resource's configuration fields

                    Example:

                            kubectl explain pod.spec.volumes.persistentVolumeClaim
            get #Get resource information
                deployment #View controller information
                pods #Information on currently running pods
                ns #List the namespaces
                nodes #Node information
                cs #Component status information
                services/svc #View service information

            describe TYPE NAME #Get a detailed description of an object; TYPE can be any object such as pod, service, node, deployment, etc.
                    Example: kubectl describe pod nginx-deploy

          
             (The following options are appended after the object)
                -o wide #Show additional columns
                -w #Continuously monitor
                -n NAMESPACE #Specify the namespace
                --show-labels #Display all labels of the objects
                -L KEY[,KEY] #Show the values of the specified label keys as extra columns
                        Example: kubectl get pods -L app,run
                -l #Label filtering; only output the resources carrying the specified labels (multiple expressions can be given):
                        Equality-based (filter by key / equal / not equal):
                            KEY_1,...KEY_N / KEY_1=VAL_1,... / KEY_1!=VAL_1,...
                        Set-based:
                            KEY in (VAL_1,...) #matches when the value of KEY is one of the listed values
                            KEY notin (VAL_1,...) #matches when the value of KEY is not one of the listed values
                            !KEY #matches objects that do not carry the label KEY
                        Examples:
                            kubectl get pods -l app,release --show-labels
                            kubectl get pods -l release=canary,app=myapp --show-labels
                            kubectl get pods -l "release in (canary,beta)" #filter the pods whose release label is canary or beta
             (the options above are appended after the object)




















          

            run NAME #Create and start a pod (in older versions, a deployment)
                --image='' #Specify the image
                --port='' #Specify the port the container exposes
                --dry-run=true #Simulate the execution without actually creating anything
                --command -- <cmd> #Run a custom command in the container
                -i #Interactive
                -t #Allocate a terminal
                Examples:
                    kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --dry-run=true
                        Test: every node in the cluster can curl this nginx, but it is only reachable from inside the cluster; external access requires a special type of service
                        Note: every node automatically gets a bridge and interface, for example cni0: inet 10.244.1.1 netmask 255.255.255.0 broadcast 10.244.1.255; all pods on that node will be in the 10.244.1.0 segment
                    kubectl run client --image=busybox -it --restart=Never
                    kubectl get pods -o wide #View the result

            create #Used to create Kubernetes objects. If the corresponding resource already exists, an error is returned and the original resource object must be deleted before creating it again; if it does not exist, the resource object is created automatically. Suitable for initializing resource objects.










            

                -f FILENAME #Create resources based on resource configuration yaml file               

                [options]: #resource object types, for example:
                    namespace  NAME  
                    deployment  NAME    
                      --image=[]
                      --dry-run='none/server/client'
                Examples:
                        kubectl create -f pod-demol.yaml
                        kubectl  create  deployment nginx-deploy --image=nginx:1.14-alpine                        

            apply (-f FILENAME | -k DIRECTORY) [options] #Used to declaratively create or update a Kubernetes object and can be applied repeatedly. If the resource object already exists, it tries to update the corresponding field values and configuration; if it does not exist, the resource object is created automatically. Suitable for updating and modifying existing resource objects: it compares the new YAML configuration with the existing object configuration and only updates the parts that need updating, without overwriting the whole existing configuration.
                    Example:
                            kubectl apply -f pod-demol.yaml

            patch #Partially update a resource. Compared with kubectl apply, kubectl patch does not need the complete resource file when updating, only the content to be updated.
                Usage:
                     kubectl patch (-f FILENAME | TYPE NAME) -p PATCH [options]
                Options:
                     -p #The patch content
                Examples:
                     kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
                     kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

            delete #Delete resources
                Usage:
                     kubectl delete (-f FILENAME | TYPE [(NAME | -l label | --all)]) [options]
                Example:
                     kubectl delete -f pod-demol.yaml

            expose #Expose a service port: equivalent to creating a service and mapping the container port of the pods onto it. After creation the service defaults to the ClusterIP type, so it is still not reachable from outside the cluster, but pods and nodes inside the cluster can access it through the service. The service name is resolved through the cluster DNS, and the service generates iptables or ipvs rules that schedule every request arriving at the service port to the backend pods it is associated with via the label selector (use kubectl describe svc NAME to see which labels it selects, and kubectl get pods --show-labels to see which labels a pod carries)
                #Analysis steps:
                    kubectl get svc -n kube-system shows that the cluster DNS service IP is 10.96.0.10;
                    expose the nginx-deploy pods on port 80 (mapping port 80 to 80) and name the service nginx. Because the default domain suffix in the cluster is default.svc.cluster.local, the nginx domain name is nginx.default.svc.cluster.local; resolving it against 10.96.0.10 returns the service address
                Usage:
                    kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]
                #Notes:
                    (-f FILENAME | TYPE NAME): the controller of the pods that the new service should be associated with
                    [--port=port]: the port of the service
                    [--target-port=number-or-name]: the port inside the pod
                    [--name=name]: the service name
                    [--type=type]: the type of the service: ClusterIP, NodePort, LoadBalancer or ExternalName. Default is ClusterIP.
                        #The 4 main Service types:
                            1. ClusterIP is the default type. It creates a virtual IP used to connect clients and pods; this IP is only usable inside the cluster and cannot be reached from outside. Typically used for backend services such as databases or caches.
                                Access path: Client → ClusterIP:ServicePort → PodIP:containerPort
                            2. NodePort allows the service to be accessed from outside the cluster: besides the cluster IP, the port is also exposed on the IP address of every cluster node. Typically used for development and testing; not recommended in production.
                                Access path: Cluster-external client → NodeIP:NodePort → ClusterIP:ServicePort → PodIP:containerPort
                            3. LoadBalancer can, in cloud environments, automatically create an external load balancer with the help of the underlying LBaaS and route client requests to the pods. Usually used in public or private clouds; it balances traffic across multiple cluster nodes, improving reliability and availability.
                                Access path: Cluster-external client → Load balancer (LBaaS) → NodeIP:NodePort → ClusterIP:ServicePort → PodIP:containerPort
                            4. ExternalName exposes an external name that resolves to the DNS name of an external service. This type creates no load balancer or cluster IP; requests are forwarded directly to the specified external service. Generally used when pods inside the cluster act as clients and need to reach services outside the cluster.
                            (When clusterIP is set to None, a headless service, the service domain name resolves directly to the backend Pods.)
                Example:
                    kubectl expose deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP
                    Access test (from a pod inside the cluster):
                        wget -O - -q nginx

            edit #Edit an object definition in place; TYPE can be any object such as pod, service, node, deployment, etc.
                Note: you can directly change the service type, for example to NodePort; a random external port is then assigned automatically (view it with kubectl get svc), so the service in the pods can be reached from outside the cluster via the IP:PORT of any node in the cluster.
                Example:
                    kubectl edit svc nginx
                    


                

            scale #Scale the number of replicas of a controller
                Usage:
                  kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)
                  Notes:
                     [--resource-version=version] [--current-replicas=count]: filter conditions (preconditions that must match for the scaling to happen)
                Examples:
                     kubectl scale --replicas=5 deployment myapp

            set
                image #Update or upgrade the image of a resource
                    Usage:
                      kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
                          #CONTAINER_NAME_1=CONTAINER_IMAGE_1: specifies which container in the pod gets which image; multiple pairs can be given. Use kubectl describe to see the container names in the pod, listed under Containers:
                    Examples:
                        kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
            
            rollout #Manage the rollout of one or more resources
                Usage:
                      kubectl rollout SUBCOMMAND [options]
                 Commands:
                     status #Show the status of the rollout
                     undo #Roll back, by default to the previous revision
                         Usage:
                             kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags] [options]
                     history #Show the rollout history
                     pause #Pause a rollout (resume it later with resume)

            logs PodName #View the logs of a container
                  -c containername (if there are multiple containers in the pod, the container name must be specified)
                Examples:
                    kubectl logs myapp-5d587c4d45-5t55g
                    kubectl logs pod-demol -c myapp

            label #Configure labels
                #A label is a key-value pair attached to an object
                #Labels can be specified when the resource is created, or managed by command afterwards
                Usage:
                  kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]
                Example:
                  kubectl label pods foo unhealthy=yes --overwrite #modify (overwrite) an existing label
                #Many resource types (controllers, services, etc.) are associated with other resources through label selectors. Two fields are usually nested to define the label selector used:
                    matchLabels: directly give the key-value pairs to match
                    matchExpressions: define the selector with expressions in the format {key:"KEY",operator:"OPERATOR",values:[VAL1,VAL2,...]}; the operator is the comparison condition, e.g. In, NotIn, Exists, NotExists
                #Node label selector (run the pod only on nodes carrying certain labels):
                    kubectl explain pod.spec.nodeSelector
                        nodeSelector <map[string]string>

            exec #Execute a command inside a container
                Usage:
                    kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
                        -c CONTAINER (if there are multiple containers in the pod, the container name must be specified)
                Examples:
                    kubectl exec myapp-5d587c4d45-h4zxn -- date
                    kubectl exec myapp-5d587c4d45-h4zxn -it -- /bin/sh
                    kubectl exec pod-demol -c myapp -it -- /bin/sh

            explain <type>.<fieldName>[.<fieldName>] #View the built-in documentation of K8S resource fields
                Examples:
                    kubectl explain pods
                    kubectl explain rs
                    kubectl explain pods.spec.containers
                                                
                    








                    




Pod life cycle:

All states from creation to completion:

        Pending, Running, Failed, Succeeded, Unknown
            #Pending: the scheduling has not completed because the conditions requested for the pod cannot be satisfied; for example, no node matching the specified label can be found.
           

Important behaviors in the Pod life cycle:

             Before the main container starts: initialization. Before the main process starts, one or more auxiliary init containers run to initialize the environment; several can run, executed serially, and each exits when it finishes, after which the main container starts
             When the main container starts: execute the post-start hook
             While the main process is running:
                 Liveness: liveness probing, to determine whether the main container is running (it may be running yet still unable to serve requests)
                 Readiness: readiness probing, to determine whether the main process in the container is ready and able to serve requests
                 (Either kind of probe supports three probe actions: execute a custom command; connect to a specified TCP socket; send a request to a specified http service and judge success or failure by the response code. That is, three probe types: ExecAction, TCPSocketAction, HTTPGetAction)
             When the main container is about to terminate: execute the pre-stop hook





The process of creating and deleting Pods:

  • Pod creation process

             The user submits the pod creation request to the API Server
             The API Server saves the desired state of the creation request to etcd
             The API Server asks the scheduler to schedule the pod and saves the scheduling result (the chosen node) to etcd
             The kubelet on the chosen node notices the state change through the API Server and fetches the creation manifest the user submitted
             kubelet creates and starts the pod on the current node according to the manifest
             kubelet sends the current pod status back to the API Server

  • Deleting Pod process:

             To prevent data loss when deleting a Pod, a termination signal is sent to each container in the Pod so that it can terminate gracefully. A default grace period of 30 seconds is given; if the containers have not exited by the time the grace period expires, a kill signal is sent.
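             The grace period can be adjusted per deletion; a sketch (the pod name is illustrative):

                 kubectl delete pod myapp-pod                           # uses the default 30-second grace period
                 kubectl delete pod myapp-pod --grace-period=10         # shorter graceful-shutdown window
                 kubectl delete pod myapp-pod --grace-period=0 --force  # skip graceful termination (use with care)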

Resource allocation list

        The API Server only accepts resource definitions in json format; when you run a command, the content you provide is automatically converted to json and then submitted. To view a resource configuration list in yaml format, for example: kubectl get pod myapp-848 -o yaml

        A yaml file can define multiple resources, which must be separated by ---

The resource configuration list can be divided into the following five parts (the first-level fields in the configuration list):    

  • apiVersion:     

        Which API version and group of K8S does this object belong to? The general format is group/version. If group is omitted, it defaults to the core group.
                Example: controllers belong to the apps group; pods belong to the core group
                View command: kubectl api-versions

  • kind:      

        Resource category, used for initialization and instantiation into a resource object.

  • metadata: metadata       

        The information provided is: name, namespace, labels, annotations (resource annotations)
        selfLink: the reference path of each resource
                Fixed format:

                        api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME
                示例:selfLink: /api/v1/namespaces/default/pods/myapp-848b5b879b-8fhgq

  • spec:    

        The desired target state/specification defined by the user: what characteristics the resource object to be created should have, or what specification it must meet (for example, how many containers it should have; which image to create the containers from; which taints to tolerate)

  • status: (read only)

        Shows the current state of the resource. If the current state differs from the target state, K8S keeps trying to move the current state toward the target state defined by the user. Note that this field is maintained by the K8S cluster and cannot be defined by users.
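Putting the five first-level fields together, a minimal skeleton looks like this (status is omitted because it is maintained by the cluster, not by the user):

            apiVersion: v1          # group/version; the core group omits the group name
            kind: Pod               # resource category
            metadata:               # name, namespace, labels, annotations
              name: demo
              namespace: default
            spec:                   # desired state defined by the user
              containers:
              - name: demo
                image: ikubernetes/myapp:v1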

Configuration of autonomous Pod in the resource configuration list:

([] represents a list)

usage

      kubectl explain pods:
        metadata <Object>
            annotations <map[string]string> #Resource annotations; unlike labels, they cannot be used to select resource objects and only provide "metadata" for the object, and there are no restrictions on the size or characters of the values
        spec <Object>
            restartPolicy <string> #Restart policy
                Always #Always restart (the default)
                    #Restart logic: frequent restarts put pressure on the server, so the first restart is immediate and each further restart waits longer, up to once every 5 minutes
                OnFailure #Restart only when the container exits with an error
                Never #Never restart
            nodeSelector <map[string]string> #Node label selector; choose what kind of node the pod should run on
            nodeName <string> #Specify the node the pod runs on
            hostNetwork <boolean> #Let the pod use the host's network namespace directly; the pod can then be reached via host ip:port without exposing ports. Commonly used with the DaemonSet controller
            hostIPC <boolean> #Share the host IPC namespace
            hostPID <boolean> #Share the host PID namespace
            containers <[]Object>
            - name <string> -required-
              image <string>
              imagePullPolicy <string> #Image pull policy ("Cannot be updated": fields carrying this note in the documentation cannot be changed after the object is created)
                  Always: always pull the image
                  Never: only use a local image; if it is missing it must be pulled to the node manually
                  IfNotPresent: use the local image if present, otherwise pull it
                  Default: for any tag except latest the default policy is IfNotPresent; if the image tag is latest the default is Always
              ports <[]Object> #Declare the ports the container exposes; the port name can be referenced elsewhere
              - name <string>
                containerPort <integer> -required-
              command <[]string> #Corresponds to the ENTRYPOINT of the image; the given command is not run in a shell, so specify a shell manually if one is needed. If command is not provided and the image was built with an ENTRYPOINT, the image's ENTRYPOINT is run
                  Example:
                      command:
                      - "/bin/sh"
                      - "-c"
                      - "sleep 3600"
              args <[]string> #Arguments to the command; if not specified, the CMD of the image is used as the arguments. References like $(VAR_NAME) are expanded; write $$(VAR_NAME) to escape them and avoid substitution
              env <[]Object> #Set environment variables
              lifecycle <Object> #Lifecycle hooks
                  postStart <Object> #Post-start hook
                      exec <Object>
                      httpGet <Object>
                      tcpSocket <Object>
                  preStop <Object> #Pre-termination hook
                      exec <Object>
                      httpGet <Object>
                      tcpSocket <Object>
              livenessProbe <Object> #Liveness probe
              readinessProbe <Object> #Readiness probe
                  (both probes share the same fields:)
                  exec <Object> #Probe action: run a custom command
                      command <[]string>
                  httpGet <Object> #Probe action: send a request to the specified http service
                      host <string> #Defaults to the pod IP
                      port <string> -required-
                      path <string>
                  tcpSocket <Object> #Probe action: try to open a TCP connection to the given port
                      host <string> #Defaults to the pod IP
                      port <string> -required-
                  failureThreshold <integer> #Consecutive failures needed to mark the probe as failed, default 3
                  successThreshold <integer> #Consecutive successes needed to mark it as succeeded, default 1
                  initialDelaySeconds <integer> #Delay before the first probe
                  periodSeconds <integer> #Probe interval, default 10 seconds
                  timeoutSeconds <integer> #Probe timeout, default 1 second






















Example:

(autonomous Pod, not managed by the controller)
            vim pod-demol.yaml:


                apiVersion: v1
                kind: Pod
                metadata:
                  name: pod-demol
                  namespace: default
                  labels:
                    app: myapp1
                    tier: frontend
                spec:
                  containers:    
                  - name: myapp
                    image: ikubernetes/myapp:v1
                    ports:
                    - name: http
                      containerPort: 80
                    readinessProbe:
                      httpGet:
                        port: http
                        path: /index.html
                      initialDelaySeconds: 1
                      periodSeconds: 3                       
                    livenessProbe:
                      httpGet:
                        port: http
                        path: /index.html
                      initialDelaySeconds: 1
                      periodSeconds: 3                         
                  - name: busybox
                    image:  busybox:latest
                    command:
                    - "/bin/sh"
                    - "-c"
                    - "sleep 3600"
                  - name: busybox-liveness-exec-container
                    image:  busybox:latest
                    imagePullPolicy: IfNotPresent
                    command: ["/bin/sh","-c","touch /tmp/healthy; sleep 60; rm -rf /tmp/healthy; sleep 3600"]   
                    livenessProbe:
                      exec:
                        command: ["test","-e","/tmp/healthy"]
                      initialDelaySeconds: 1
                      periodSeconds: 3                 


            Create a container based on the created configuration manifest file:
                kubectl create -f pod-demol.yaml
                Delete:
                kubectl delete -f pod-demol.yaml
                kubectl delete pods pod-demol

Pod controller configuration in the resource configuration list

ReplicaSetController

usage

        kubectl explain rs/ReplicaSet:
          kind <string>
          metadata <Object>
          spec <Object>
            replicas <integer> #Desired number of pod replicas
            selector <Object> #Label selector
              matchLabels <map[string]string>
              matchExpressions <[]Object>
            template <Object> #Pod template; a nested Pod configuration list
              metadata <Object> #The Pod's metadata
              spec <Object> #The Pod's spec






Example:

(You can directly modify the yaml configuration of a running controller with kubectl edit rs myapp to scale it, update the image, etc.; but after an image update the existing pods must be deleted manually so that the automatically re-created pods use the new image)

        apiVersion: apps/v1
        kind: ReplicaSet
        metadata:
          name: myapp
          namespace: default
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: myapp
              release: canary
          template:
            metadata:
              name: myapp-pod
              labels:
                app: myapp
                release: canary
                environment: qa
            spec:
              containers:
              - name: myapp-container
                image: ikubernetes/myapp:v1
                ports:
                - name: http
                  containerPort: 80    

deploymentcontroller

(Most configurations are similar to ReplicaSet)

usage

        kubectl explain deploy/deployment:    
          spec <Object>
            revisionHistoryLimit <integer> #How many historical versions are saved, the default is 10
            strategy <Object> #Update strategy
              type <string> #"Recreate" or "RollingUpdate"; rolling update is the default method, and its parameters are configured in the rollingUpdate field below
              rollingUpdate <Object>
                maxSurge <string> #Maximum number of extra pods allowed above the desired count during an update; a number or a percentage, default 25%
                maxUnavailable <string> #Maximum number of pods that may be unavailable during an update; a number or a percentage, default 25%
Example:     
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: myapp-deploy
          namespace: default
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: myapp
              release: canary
          template:
            metadata:
              labels:
                app: myapp
                release: canary
            spec:
              containers:
              - name: myapp
                image: ikubernetes/myapp:v2
                ports:
                - name: http
                  containerPort: 80
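
        Based on the Deployment above, a typical rolling update and rollback could look like this (the file name deploy-demo.yaml and the v3 image tag are assumptions for the example):

            kubectl apply -f deploy-demo.yaml                                     # create or update declaratively
            kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3  # trigger a rolling update
            kubectl rollout status deployment myapp-deploy                        # watch the update progress
            kubectl rollout history deployment myapp-deploy                       # list the revisions
            kubectl rollout undo deployment myapp-deploy                          # roll back to the previous revision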

DaemonSetController

(Except that the configuration file does not have replicas, most other configuration files are similar to the deployment controller and also support rolling update strategies)

usage

        kubectl explain ds/daemonset:
          spec <Object>
            updateStrategy <Object>
              type <string> #Can be "RollingUpdate" or "OnDelete" (update when the pod is deleted). Default is RollingUpdate.
              rollingUpdate <Object>

Example:
            apiVersion: apps/v1
            kind: DaemonSet
            metadata:
              name: filebeat-ds
              namespace: default
            spec:
              selector:
                matchLabels:
                  app: filebeat
                  release: stable
              template:
                metadata:
                  labels:
                    app: filebeat
                    release: stable
                spec:
                  containers:
                  - name: filebeat
                    image: ikubernetes/filebeat:5.6.5-alpine
                    env:
                    - name: REDIS_HOST
                      value: redis.default.svc.cluster.local
                    - name: REDIS_LOG_LEVEL
                      value: info

Service configuration in the resource configuration list:

usage:

      kubectl explain svc/service:
        spec <Object>
          clusterIP <string>
              #The service IP; normally not specified because the cluster assigns one automatically. If you do specify it, make sure it does not conflict with an existing IP. It can also be set to None: a service without a cluster IP is called a headless service, and its domain name then resolves directly to the backend Pods
          ports <[]Object>
            name <string>
            port <integer> #The service port
            targetPort <string> #The port inside the pod
            nodePort <integer> #The port exposed on every node's IP; only effective for the NodePort type
            protocol <string> #Defaults to TCP
          externalName <string> #Only effective for the ExternalName service type; the resolution result should be a DNS record pointing at the external service. Rarely used
          sessionAffinity <string> #When set to ClientIP, requests from the same client IP are always dispatched to the same backend Pod
          type <string> #ClusterIP (default), NodePort, LoadBalancer or ExternalName
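
        A minimal Service manifest using the fields above (the selector assumes pods labeled app=myapp, release=canary, as in the controller examples in this document):

            apiVersion: v1
            kind: Service
            metadata:
              name: myapp-svc
              namespace: default
            spec:
              type: ClusterIP
              selector:
                app: myapp
                release: canary
              ports:
              - name: http
                port: 80          # service port
                targetPort: 80    # container port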








ingress Control

Why use ingress Control

        Service is a layer-4 scheduler, which is a limitation: it only works at layer 4 of the TCP/IP stack (OSI model). If a user makes an https request that is proxied through a service, the CA certificate and private key cannot be configured on the service and the layer-7 https protocol cannot be scheduled; https would then have to be configured on every back-end server. Because https is expensive and slow, we want to offload it at the scheduler as much as possible, so a layer-7 scheduler is needed: https on the external network and plain http inside the cluster.
        Solution 1:    
            Set up a dedicated pod running an ordinary layer-7 application such as nginx in user space. When a user accesses a service, instead of letting the request reach the back-end service directly, it first reaches this layer-7 proxy pod; pods can talk to each other directly over the pod network, so the proxy completes the reverse proxying
                Access path: https request from a client outside the cluster → Load balancer (LBaaS) → NodeIP:NodePort → ClusterIP:ServicePort → PodIP:containerPort (the layer-7 proxy offloads https and reverse-proxies the request as plain http to the backend) → PodIP:containerPort (the container that actually provides the service)
                Disadvantages: Multi-level scheduling leads to low performance
        Option 2: ingress Control
            Directly let the layer-7 proxy pod share the node's network namespace and listen on the host's address and port. Start this container with a DaemonSet controller, and use node taints to control how many (and which) nodes run only this layer-7 proxy container. This proxy pod is what is called the ingress Controller on K8S.
                Access path: https request from a client outside the cluster → the host port the ingress controller listens on (https is offloaded there) → PodIP:containerPort (the container that actually provides the service)
                Proxying options:
                    Option 1: set up several virtual hosts with different host names or ports, each host proxying one group of backends
                    Option 2: URL mapping, each path mapped to one group of backend services



Layer-7 proxy services:

  • HAProxy: used less often
  • nginx: default
  • Traefik: For the development of microservices, it can monitor changes in its own configuration files and automatically reload the configuration files.
  • Envoy: It is more popular among servicemesh networks. It is mostly used in microservices. It can also monitor changes in its own configuration files and automatically reload the configuration files.

IP is not fixed problem      

        If nginx is used as the layer-7 proxy in front of the backends, the backend IPs are not fixed because the backends are pod containers; how should the upstream in the nginx configuration be kept in sync with the backend IPs?
        Solution:
                Set up a service in the middle layer, but this service is not used as a proxy; it is only responsible for grouping the backend resources.
                There is an ingress resource on K8S (note: different from the ingress Controller) that can read the backend pod IPs belonging to the group each service selects, define them as upstream servers, inject them into the nginx configuration file and trigger nginx to reload its configuration.

Install and deploy ingress-nginx Control

(Note that k8s cannot be lower than 1.19)
        kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
            #Ingress resources will automatically inject their configuration into the controller.
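
        To check that the controller came up (the namespace and object names are those created by the official manifest):

            kubectl get pods -n ingress-nginx
            kubectl get svc  -n ingress-nginx    # the controller service exposes the NodePort/LoadBalancer ports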

Ingress configuration in the resource configuration list:

(After the ingress resource is created, it will be automatically injected into the ingress Control according to the configuration)
            kubectl explain ingress:
              metadata <Object>
                annotations <map[string]string> #e.g. kubernetes.io/ingress.class declares which ingress controller (proxy) should handle this ingress
              spec <Object>
                backend <Object> #Define the default backend service
                  serviceName <string>
                  servicePort <string>
                rules <[]Object> #Forwarding rules
                  host <string> #Virtual host name
                  http <Object>
                    paths <[]Object> #URL paths, each mapped to a backend service
                tls <[]Object> #https configuration: the hosts covered and the secretName holding the certificate
Ingress configuration list example (https protocol)

  • 1. Use openssl to generate a self-signed certificate and key
  • 2. Create a secret resource object to wrap the self-signed certificate

            kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key

  • 3.Create ingress

        #tls means the https protocol; secretName specifies the secret used for certificate/key authentication; tomcat.magedu.com is the virtual host name for the backend; myapp is the service in front of the backend; 80 is the port of the service provided by the backend
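
        A sketch of the ingress described above, written in the networking.k8s.io/v1 format (the resource name ingress-tomcat-tls is made up here; host, service name, port and secret follow the description above, and ingressClassName should match your controller):

            apiVersion: networking.k8s.io/v1
            kind: Ingress
            metadata:
              name: ingress-tomcat-tls
              namespace: default
            spec:
              ingressClassName: nginx
              tls:
              - hosts:
                - tomcat.magedu.com
                secretName: tomcat-ingress-secret
              rules:
              - host: tomcat.magedu.com
                http:
                  paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: myapp
                        port:
                          number: 80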

      

storage volume

        Multiple containers in the same pod can share a storage volume, because the volume belongs to the pod rather than to a container: all containers in a pod are built on top of the same infrastructure container image, pause.

Storage volume type:

  • Non-persistent storage:

            emptyDir: used as a temporary empty directory for the containers, mapped to a host directory or to memory; when the pod is deleted, the data in this directory is deleted with it
            hostPath: maps a host directory into the pod; when the pod is re-created after a change, it must be scheduled to the same host for the data to persist
            gitRepo: similar to emptyDir but with content: when the pod is created, the contents of a specified git repository are cloned and mounted. Local changes are not automatically pushed or synced with the remote; a sidecar helper container is required for that.

  • Persistent storage:

            Traditional network storage: SAN (iSCSI), NAS (NFS, CIFS)
            Distributed storage (file-system level and block level): glusterfs, rbd, ceph

  • Cloud storage:

            ebs、Azure Disk

Pod storage volume configuration in the resource configuration list:

        kubectl explain pod.spec
          volumes <[]Object> #Define storage volumes in pod
          - name
            emptyDir <Object> #emptyDir storage volume
              medium <string>
              sizeLimit <string>
            hostPath <Object> #hostPath storage volume
              path <string> -required- #Specify the path on the host machine
              type <string>   
                  #DirectoryOrCreate: if the directory on the host does not exist, create it
            nfs <Object>  #nfs storage volume; pods on different nodes that mount the same path use the same shared storage
              server <string> -required-  #nfs server address
              path <string> -required-  #nfs shared path
              readOnly <boolean>

        kubectl explain pod.spec.containers.volumeMounts
          volumeMounts <[]Object>  #Mount storage volumes in containers
            name <string> -required-  #Name of a volume defined in the pod
            mountPath <string> -required-  #Mount point inside the container
            readOnly <boolean>
            subPath <string>
            subPathExpr <string>

Examples

  • Example 1 (emptyDir storage volume):

        Define two containers in one pod, define a storage volume in the pod, and mount it in both containers. The second container writes content into an HTML file, which is then served as the homepage by the first (nginx) container. (You can verify this with curl.)
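        A minimal sketch of such a pod, assuming nginx:alpine and busybox images and hypothetical names:

            apiVersion: v1
            kind: Pod
            metadata:
              name: pod-vol-demo            #hypothetical name
              namespace: default
            spec:
              volumes:
              - name: html
                emptyDir: {}
              containers:
              - name: myapp
                image: nginx:alpine         #assumed image; serves the shared directory as its web root
                volumeMounts:
                - name: html
                  mountPath: /usr/share/nginx/html
              - name: busybox
                image: busybox              #assumed image; writes into the same volume
                command: ["/bin/sh", "-c", "while true; do date >> /data/index.html; sleep 2; done"]
                volumeMounts:
                - name: html
                  mountPath: /data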

  • Example 2 (hostPath storage volume)

        
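        A hedged sketch, assuming a hypothetical host path and node name:

            apiVersion: v1
            kind: Pod
            metadata:
              name: pod-hostpath-vol        #hypothetical name
            spec:
              nodeName: node01              #assumed node name; pins the pod so the hostPath data stays reachable
              containers:
              - name: myapp
                image: nginx:alpine         #assumed image
                volumeMounts:
                - name: html
                  mountPath: /usr/share/nginx/html
              volumes:
              - name: html
                hostPath:
                  path: /data/pod/volume1   #hypothetical directory on the host
                  type: DirectoryOrCreate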

  • Example 3 (NFS shared storage)

        Requires nfs-utils to be installed on each node (a sketch follows below)

        
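        A hedged sketch, assuming a hypothetical nfs export path and server address:

            apiVersion: v1
            kind: Pod
            metadata:
              name: pod-nfs-vol             #hypothetical name
            spec:
              containers:
              - name: myapp
                image: nginx:alpine         #assumed image
                volumeMounts:
                - name: html
                  mountPath: /usr/share/nginx/html
              volumes:
              - name: html
                nfs:
                  path: /data/volumes/v1    #assumed nfs export
                  server: 172.20.0.100      #hypothetical nfs server address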

PVC 

Function and implementation logic

    PVC: PersistentVolumeClaim, a user's claim (request) for storage. By decoupling k8s workloads from the storage volumes, users who actually consume storage do not need to care about the underlying storage implementation details, which lowers the barrier to use: they only need to use a PVC directly. A PVC is analogous to a Pod: a Pod consumes node resources while a PVC consumes PV resources; a Pod can request CPU and memory, while a PVC can request a specific amount of storage space and a specific access mode.
    Storage logic: in a pod you only need to define a storage volume and specify the required size. That volume must be bound to a pvc in the current namespace, and the pvc in turn must be bound to a pv, which is the actual storage space on some storage device. pvc and pv are therefore abstract, standard resources on k8s, created in the same way as any other k8s resource. The storage engineer first carves up the space on the underlying storage; the k8s administrator then maps each piece of space into the cluster as a pv and creates pvcs; users only need to define their pods and reference a pvc in them. A pvc that nobody uses is just an empty claim; when someone uses it, it must be bound to a pv, which is roughly equivalent to placing the data claimed by the pvc onto that pv. Which pv a pvc binds to depends on the storage size requested by the creator, the access mode (single-node read-write, multi-node read-write, etc.), labels, and so on. If no suitable pv can be found, the pvc blocks and stays pending. pvc and pv bind one-to-one: once a pv is occupied by a pvc, it cannot be used by another pvc. After a pvc is created it behaves like a storage volume that can be accessed by multiple pods; whether multi-pod access is allowed is determined by the access mode it defines.

Using pvc in the pod definition in the resource configuration list

        kubectl  explain pod.spec.volumes
          persistentVolumeClaim   <Object>
            claimName    <string> -required-  #pvc name
            readOnly    <boolean>

Define pvc in the resource configuration list  

        kubectl explain pvc
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata <Object>
          spec <Object>
            accessModes <[]string>  #Access model (see the access-model help under pv below)
            resources <Object>  #Resource request: sets the minimum storage space, e.g. 10G means a pv of at least 10G must be found for binding
              requests <map[string]string>  #Minimum resources requested
              limits <map[string]string>  #Limit the maximum resources used
            storageClassName <string>  #Storage class name
            volumeMode <string>  #Storage volume mode, through which the volume type can be restricted
            volumeName <string>  #Volume name, for one-to-one precise binding; if not specified, the best match is selected from the qualified pvs



Define pv in the resource configuration list

        #Similar to the way storage volumes are defined in a pod

        #Note: do not add a namespace when defining a pv, because a pv is a cluster-level resource and does not belong to any namespace, so all namespaces can use it; a pvc, however, does belong to a namespace. (Namespace objects themselves are also cluster-level resources.)

        kubectl explain pv
          apiVersion: v1
          kind: PersistentVolume
          metadata    <Object>
          spec    <Object>
            accessModes <[]string>  #Specify the access model; it is a list, so multiple modes can be defined
              ReadWriteOnce  #single-node read-write, abbreviated RWO
              ReadOnlyMany  #multi-node read-only, abbreviated ROX
              ReadWriteMany  #multi-node read-write, abbreviated RWX
                  #See the access-modes help documentation for which modes each volume type supports
            persistentVolumeReclaimPolicy    <string>  #Reclaim policy: how the data in the bound pv is handled after the pvc is released
              Retain  #Retain the data
              Delete  #Delete the pv directly
              Recycle  #Scrub the data and set the pv back to the idle (Available) state
            capacity <map[string]string>  #Specify the amount of storage space provided
            nfs    <Object>
              path    <string> -required-
              readOnly    <boolean>
              server    <string> -required-

Example: implementing pvc with NFS

  • nfs configuration

       /data/volumes/v1 172.20.0.0/16(rw,no_root_squash)

       /data/volumes/v2 172.20.0.0/16(rw,no_root_squash)

       /data/volumes/v3 172.20.0.0/16(rw,no_root_squash)

       /data/volumes/v4 172.20.0.0/16(rw,no_root_squash)

       /data/volumes/v5 172.20.0.0/16(rw,no_root_squash)

  • pv configuration

        
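        A hedged sketch of one such pv (pv001 and the server address are hypothetical; create pv002..pv005 the same way for the other exports, varying the access modes and capacity as needed):

            apiVersion: v1
            kind: PersistentVolume
            metadata:
              name: pv001                   #hypothetical name; no namespace, since pv is cluster-level
              labels:
                name: pv001
            spec:
              nfs:
                path: /data/volumes/v1
                server: 172.20.0.100        #hypothetical nfs server address
              accessModes: ["ReadWriteMany", "ReadWriteOnce"]
              capacity:
                storage: 2Gi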

  • pod and pvc configuration

        
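        A hedged sketch of the pvc and the pod that uses it (names and image are hypothetical; a 6Gi request means only a pv of at least 6Gi with a matching access mode can be bound):

            apiVersion: v1
            kind: PersistentVolumeClaim
            metadata:
              name: mypvc                   #hypothetical name; a pvc belongs to a namespace
              namespace: default
            spec:
              accessModes: ["ReadWriteMany"]
              resources:
                requests:
                  storage: 6Gi              #minimum storage space required of the bound pv
            ---
            apiVersion: v1
            kind: Pod
            metadata:
              name: pod-pvc-vol             #hypothetical name
              namespace: default
            spec:
              containers:
              - name: myapp
                image: nginx:alpine         #assumed image
                volumeMounts:
                - name: html
                  mountPath: /usr/share/nginx/html
              volumes:
              - name: html
                persistentVolumeClaim:
                  claimName: mypvc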

        Check whether a pv has been bound: kubectl get pv

StorageClass

The role and implementation logic of StorageClass

  • Problem:

        When a pvc is created, there may be no ready-made pv that satisfies the requested conditions, and it is not realistic for the k8s administrator and the storage engineer to stand by and create them on demand all the time.

  • Solution logic:

        All the storage space of the various underlying storage systems that has not yet been made into pvs can be classified by performance (e.g. IO) and quality (e.g. redundancy, price) and defined as storage classes (StorageClass). When a pvc applies for a pv, it no longer targets a specific pv but a storage class, which can dynamically create a pv that meets the requirements; this intermediate layer completes the resource allocation.

        Note: the storage device must provide a restful-style request interface (an API that can be called to create volumes on demand)

  • Working logic:

        For example, ceph aggregates many local disks into, say, 4 PB of space. Before this space can be used, it must be divided into images, ceph's allocation sub-unit; each image is equivalent to a disk or a partition. When a 20G pv is requested, a 20G image is created on the fly through ceph's restful interface, formatted and exported by ceph, then defined in the cluster as a 20G pv and bound to the pvc.

        
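        A hedged sketch of a StorageClass for ceph rbd plus a pvc that requests storage from it; the provisioner parameters vary by storage backend, and the names, monitor address and secrets below are illustrative assumptions:

            apiVersion: storage.k8s.io/v1
            kind: StorageClass
            metadata:
              name: fast-rbd                      #hypothetical class name
            provisioner: kubernetes.io/rbd        #in-tree ceph rbd provisioner; many clusters use a CSI driver instead
            parameters:
              monitors: 172.20.0.21:6789          #hypothetical ceph monitor address
              adminId: admin
              adminSecretName: ceph-admin-secret  #hypothetical secret holding the ceph admin key
              pool: kube
              userId: kube
              userSecretName: ceph-kube-secret    #hypothetical secret holding the user key
            reclaimPolicy: Retain
            ---
            apiVersion: v1
            kind: PersistentVolumeClaim
            metadata:
              name: rbd-pvc                       #hypothetical name
            spec:
              storageClassName: fast-rbd          #request a dynamically created pv from this class instead of a pre-made pv
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 20Gi                   #matches the 20G image example above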


Origin blog.csdn.net/weixin_43812198/article/details/134924812