Kubernetes (k8s) from Entry to Master: The Pod Controller - Chapter 1 - Section V [Beginners]

One, Pod controller overview

  What is a Pod controller? Simply put, a Pod controller is used to manage Pods. Anyone who has deployed large numbers of containers with Docker will appreciate how hard they are to manage; the Pod controllers we study today solve exactly that problem. The key point is this: a Pod controller is an intermediate layer for managing Pods. It ensures that Pod resources match the desired state; when a Pod fails, it tries to restart it according to its policy, and if restarting does not help, it recreates the Pod. So what kinds of Pod controllers are there? See the list below:

  • ReplicationController: the legacy management tool from before version 1.2, since replaced by ReplicaSet. We only mention it here so you know it exists.
  • ReplicaSet: creates, deletes, and updates Pods. A ReplicaSet ensures that a specified number of Pods are running, using a label selector to decide whether the number of matching Pods meets the user-specified replica count. (Replica management)
  • Deployment: works on top of ReplicaSet to drive the actual state of Pods toward the user's target state. Supports scaling, rollback, rolling updates, and more; the best controller for managing stateless applications and daemon-style services.
  • DaemonSet: ensures that a copy of a Pod runs on every node. For stateless, daemon-style services.
  • Job: runs a one-off task to completion; does not need to run continuously.
  • CronJob: runs tasks on a schedule; does not need to run continuously.
  • StatefulSet: manages stateful applications, with each instance managed individually. Considerably more complex to operate.

Two, ReplicaSet

   A ReplicaSet creates, deletes, and updates Pods. It ensures that a specified number of Pods are running, using a label selector to decide whether the number of matching Pods meets the user-specified replica count. (Replica management)

So how do we define one? See the following YAML file:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
    name: rsdemo
    namespace: default
spec:
    replicas: 2
    selector:
        matchLabels:
            app: rsdemo
            release: can
    template:
        metadata:
            name: rsdemo1
            labels:
                app: rsdemo
                release: can
        spec:
            containers:
            - name: resdemocontainers
              image: ikubernetes/myapp:v1
              imagePullPolicy: IfNotPresent
              ports:
              - name: http
                containerPort: 80
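
Assuming the manifest above is saved as rsdemo.yaml (the file name is a hypothetical choice), the ReplicaSet can be created and inspected on a running cluster with commands like these:

```shell
# Create the ReplicaSet from the manifest
kubectl apply -f rsdemo.yaml

# Confirm that the desired and current replica counts match
kubectl get rs rsdemo -n default

# Scale to three replicas without editing the file
kubectl scale rs rsdemo --replicas=3
```

If you delete one of the Pods, the ReplicaSet notices that the actual count no longer matches the desired count and immediately creates a replacement; that is the self-healing behavior described above.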

Three, Deployment (key topic)

  A Deployment works on top of ReplicaSet to drive the actual state of Pods toward the user's target state. It supports scaling, rollback, rolling updates, and more, and is the best controller for managing stateless applications and daemon-style services. The YAML file is defined as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentdemo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
        version: v1
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80

Version updates and rollbacks

 

Changing the rolling-update strategy:

With the rolling-update strategy, if you do not want any of the original containers to be stopped during an update, you can set the maxUnavailable parameter to 0. For an explanation of these parameters, run the following command on the master node:

kubectl explain deployment.spec.strategy.rollingUpdate
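
As an example, a conservative strategy that never takes an existing Pod down before its replacement is ready might look like this inside the Deployment spec (the values shown are one possible choice, not a requirement):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count during the update
      maxUnavailable: 0  # never stop an existing Pod before its replacement is ready
```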

View the rollout history:

kubectl rollout history deploy myapp-deploy

Roll back to the previous version:

kubectl rollout undo deployment/nginx-test

kubectl rollout undo deployment myapp

You can also use the --to-revision parameter to roll back to a specific historical revision:

kubectl rollout undo deployment/nginx-test --to-revision=2

kubectl rollout undo deployment myapp --to-revision=1

Note that a rollback itself creates a new revision, so after rolling back to an earlier revision, rolling back again returns you to the revision you just left.
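
While an update or rollback is in progress, its status can be watched, and a specific revision can be inspected before rolling back to it (the deployment name myapp is taken from the commands above):

```shell
# Watch the rollout until it completes or fails
kubectl rollout status deployment myapp

# Inspect what a specific revision contains before rolling back to it
kubectl rollout history deployment myapp --revision=2
```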

Resource update

  When you need to change an image, the easiest way is to modify the YAML file and reload it with apply -f; the Deployment automatically picks up the change and performs a rolling upgrade. But if you want to make changes from a script, editing the configuration file is not a wise choice; instead, we can use the patch command:

kubectl patch

Syntax

$ kubectl patch (-f FILENAME | TYPE NAME) -p PATCH

Examples

Patch a node to mark it unschedulable:

kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

Update a container's image:

kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

Update a Deployment's rolling-update strategy:

kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1, "maxUnavailable":0}}}}'
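
By default -p is interpreted as a strategic merge patch; the --type flag selects other patch formats. For instance, a JSON patch expresses an image change as an explicit list of operations (the image value here is purely illustrative):

```shell
# Same kind of change as above, but as a JSON patch with an explicit operation list
kubectl patch deployment myapp-deploy --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "ikubernetes/myapp:v2"}]'
```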

Of course, if you only need to change the container image, you can use the simpler set image command:

kubectl set image

Update the container image of an existing resource object.

Usable resource objects include (abbreviations are case insensitive):

pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)

Syntax

$ kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N

Examples

Set the Deployment's nginx container image to "nginx:1.9.1" and its busybox container image to "busybox":

kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1

Update the nginx container image of all Deployments and ReplicationControllers to "nginx:1.9.1":

kubectl set image deployments,rc nginx=nginx:1.9.1 --all

Update the images of all containers in DaemonSet abc to "nginx:1.9.1":

kubectl set image daemonset abc *=nginx:1.9.1

Update the nginx container image using a local manifest file, printing the result without sending it to the server:

kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml

Four, DaemonSet

Reference in this section: https://www.cnblogs.com/breezey/p/6582519.html

There is a class of scenarios in which we need a copy of the same application running on every Kubernetes node. For example, when we later discuss collecting logs from Pods in Kubernetes, a log collection process such as fluentd must run on every k8s node. Under normal circumstances, Kubernetes assigns Pods to nodes automatically based on its internal scheduling algorithm, so there is no way to guarantee that a fluentd Pod runs on every node. This is where the DaemonSet comes in handy. In short, a DaemonSet makes an application run one copy on every node of the k8s cluster.

Let's look directly at the following example, which starts a busybox on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox
spec:
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: myhub.mingyuanyun.com/library/busybox
        command:
        - sleep
        - "3600"

Via kubectl get daemonset we can see that six busybox Pods started, because we have six Kubernetes nodes. We did not specify a replica count at all; that is exactly the role of a DaemonSet:

NAME      DESIRED   CURRENT   READY     NODE-SELECTOR   AGE
busybox   6         6         6         <none>          1m

You can also refer to: https://www.cnblogs.com/xzkzzz/p/9553321.html (it starts a Redis instance and configures filebeat)

 

This section will be updated in follow-up posts.



Origin blog.csdn.net/Laughing_G/article/details/102910979