K8s introduces controllers for managing stateless Pods; their principles are described below.

Pod Controller:

  ReplicationController: the only controller in early K8s. It was later found that making one controller handle every task was too complicated, so it was deprecated.
  ReplicaSet: helps the user create a specified number of Pod replicas, and ensures the number of running Pod replicas always matches the number the user desires,
        scaling up when there are too few and removing Pods when there are too many. [It can be considered the new version of ReplicationController.]
   It consists of three main components (see the example manifest below):
     1. The number of Pod replicas the user desires.
        2. Label selector: used to select the Pods it manages.
        3. Pod template: if the label selector selects fewer Pods than desired, new Pods are created from this template.
  Deployment: the best controller for managing stateless Pods.
      It supports rolling updates and rollback, and it provides declarative configuration: it lets us define the configuration logic up front,
      and any defined resource may be re-declared at any time, as long as the resource definition supports dynamic updates.
  DaemonSet: a controller that ensures exactly one Pod runs on each cluster node. It is generally used for system-level applications.
      It does not define a replica count; the number of Pods created is determined automatically by the size of the cluster. That is, if a new
      node joins the cluster, the DaemonSet automatically uses its Pod template to create a Pod on the new node, and it guarantees that each node
      runs exactly one such Pod. Therefore this controller must have a Pod template and a label selector.
      It has another role: it can restrict its Pods to run only on nodes that meet specified conditions, and still ensure exactly one Pod runs on each of them.
    The Pods it manages:
      1. must be daemons that keep running in the background.
      2. must not terminate on their own; even when idle they keep watching for user requests on a socket or for file changes.
  Job: mainly used to create Pods that perform a specified one-off task; once the task completes, the Pod exits. If the task does not complete, the Pod is restarted until it does.
  CronJob: defines a periodic task. Each run starts a Pod much like a Job does, and the Pod terminates when the task finishes rather than being restarted.
      It also automatically handles the case where the previous run has not finished when the next scheduled start time arrives. (Minimal example manifests follow.)
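
  # Minimal Job and CronJob manifests, as a sketch of the two controllers just described. These are not from
  # the original post: the busybox image and the echo command are illustrative assumptions, and on a v1.13
  # cluster CronJob still lives in the batch/v1beta1 API group.
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: demo-job                  # hypothetical name
  spec:
    template:
      spec:
        containers:
        - name: demo
          image: busybox
          command: ["/bin/sh", "-c", "echo task done"]
        restartPolicy: OnFailure    # a Job's Pods may not use restartPolicy: Always
  ---
  apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: demo-cronjob              # hypothetical name
  spec:
    schedule: "*/5 * * * *"         # run every 5 minutes
    concurrencyPolicy: Forbid       # if the previous run has not finished when the next is due, skip the new run
    jobTemplate:
      spec:
        template:
          spec:
            containers:
            - name: demo
              image: busybox
              command: ["/bin/sh", "-c", "echo task done"]
            restartPolicy: OnFailure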


Controller Example:
  ReplicaSet controller Example:
  # To understand the meaning of each of the parameters below, run: kubectl explain replicaset

  vim replicaset.yaml
    apiVersion: apps/v1    # In general, the basic skeleton of a manifest file is:
    kind: ReplicaSet       # apiVersion, kind, metadata, and spec; spec holds the parameters a ReplicaSet requires.
    metadata:              # to view the parameters spec supports: kubectl explain replicaset.spec
      name: myapp
      namespace: default
    spec:
      replicas: 2          # the number of replicas the ReplicaSet defines.
      selector:
        matchLabels:
          app: myapp
          release: canary
      template:            # to view the parameters template supports: kubectl explain replicaset.spec.template
        metadata:
          name: myapp-pod  # the Pod's namespace must usually match the controller's, so it can be omitted.
          labels:          # the labels of the Pods created here must satisfy the selector above, otherwise Pods will be created endlessly.
            app: myapp
            release: canary
        spec:
          containers:
          - name: myapp-container
            image: busybox
            ports:
            - name: http
              containerPort: 80

# Start it:
  kubectl create -f replicaset.yaml
  kubectl get pods
  kubectl get rs

  kubectl delete pods PodName
  kubectl get pods    # you can see that when the number of Pods under the controller drops, new ones are created automatically.

  # next, test what happens when the controller sees more Pods than it wants
    kubectl label pods PodName release=canary app=myapp    # add the matching labels to another Pod
    kubectl get pods --show-labels    # one surplus Pod is terminated to return to the desired count
    Note:
        this test also shows that when defining the Pods a controller manages, the Pod labels must be defined precisely;
      try to avoid the tragedy of a user-created Pod's labels exactly matching the ones your controller selects!
    Also:
      Services and controllers are not directly related; they simply both use label selectors to find Pods,
      which also means one Service can serve Pods created by multiple controllers (a sketch follows).
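
    # A sketch of that relationship: a Service whose selector happens to match the Pods from replicaset.yaml
    # above. The name myapp-svc is hypothetical, not from the original text.
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-svc
      namespace: default
    spec:
      selector:         # the same labels the ReplicaSet selects; the Service and the controller never
        app: myapp      # reference each other, they just independently match the same Pods.
        release: canary
      ports:
      - port: 80
        targetPort: 80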

# The controller's configuration supports dynamic updates.
  kubectl edit rs myapp    # modify the replica count, then watch the effect. (myapp is the rs name from the manifest above.)

# It also supports dynamically updating the Pods' image version.
  kubectl edit rs myapp
  kubectl get rs -o wide
  Note:
   the image version is modified here, but the image the running Pods use has not actually changed.
  At this point, if you manually delete a Pod, the replacement Pod will be created with the new version.
  This makes a canary release easy to achieve: delete one Pod, let that one Pod use the new version, and the new-version Pod
  will receive part of the traffic, which can serve as test traffic. If after a couple of days no user has reported a bug, you can
  manually delete the remaining Pods one by one and replace them all with new-version Pods. (A sketch of those commands follows.)
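
  # A sketch of that manual canary flow; the Pod names are placeholders for whatever kubectl get pods shows:
    kubectl delete pod myapp-xxxxx    # delete one old Pod; the ReplicaSet recreates it from the updated template
    kubectl get pods -o wide
    kubectl get pod myapp-yyyyy -o jsonpath='{.spec.containers[0].image}'    # confirm the replacement runs the new image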


Deployment:

    It is a controller of ReplicaSet controllers: a Deployment does not manage Pods directly; it manages
  ReplicaSet controllers. The advantage of this is that dynamic updates can be rolled back. [The original post has a figure of the
  gradual update process here, not reproduced.] A rolling update works like this: ReplicaSetV1 controls the old Pods; one at a time
  they are removed and rebuilt under ReplicaSetV2. If a problem appears after ReplicaSetV2 goes live, the Deployment can quickly roll back to V1.
  A Deployment generally does not delete old ReplicaSets directly; it retains 10 revisions for rollback use.

 By tuning the granularity with which Pods are updated, it can implement gradual updates, canary updates, and blue-green updates.
 The granularity with which it controls Pod replacement, assuming 5 desired replicas, works like this (the sketch after the parameter list below maps each case to parameter values):
  1. At least 5 Pods must be available, but temporarily more are allowed: it creates one new Pod, then deletes one old Pod, repeating until all are replaced.
  2. At most 5 Pods may exist, but temporarily fewer are allowed: it deletes one old Pod first, then creates a new one, repeating until all are replaced.
  3. One extra and one missing are both allowed: it can replace two at a time, deleting two and creating two, until all are replaced.
  4. Temporarily doubling is also allowed: it creates 5 new Pods at once, switches over to the new Pods directly, then deletes the old Pods.

Deployment control parameters:
  strategy:    # set the update strategy
    type <Recreate | RollingUpdate>    # the update strategy type. Recreate: delete the old Pods, then rebuild them from the new template.
                                       # RollingUpdate: a rolling update.
    rollingUpdate:
      maxSurge:    # the maximum number of extra replicas allowed during a rolling update. It supports two kinds of values:
                   #   5:   at most 5 extra replicas.
                   #   20%: with 10 Pod replicas, a 20% surge means at most 2 extra replicas.
      maxUnavailable:    # the maximum number of unavailable replicas. It also supports an integer or a percentage.
  revisionHistoryLimit <int>    # how many historical ReplicaSet revisions to keep. Default: 10.
  paused <boolean>    # whether to pause before updating.
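
  # A sketch mapping the four granularity cases above onto these parameters (my reading of the text; values assume 5 replicas):
  strategy:
    type: RollingUpdate
    rollingUpdate:       # case 1 (never below 5, may exceed):   maxSurge: 1, maxUnavailable: 0
      maxSurge: 1        # case 2 (never above 5, may dip):      maxSurge: 0, maxUnavailable: 1
      maxUnavailable: 0  # case 3 (one over, one under):         maxSurge: 1, maxUnavailable: 1
                         # case 4 (temporarily double, blue-green style): maxSurge: 100%, maxUnavailable: 0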

vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: busybox
        ports:
        - name: http
          containerPort: 80


# After writing the above manifest file, you can run:
  kubectl apply -f deploy-demo.yaml

  kubectl get deploy
  kubectl get rs

  vim deploy-demo.yaml
    # modify the replica count to 4

  kubectl apply -f deploy-demo.yaml    # run apply again with the modified file; apply can be applied many times, and it automatically picks up changes in the manifest.

  kubectl describe deploy myapp-deploy    # view Annotations and RollingUpdateStrategy.


  # dynamic rolling-update test
    vim deploy-demo.yaml
      # modify the image version to the new version

  terminal 2:
    kubectl get pods -l app=myapp -w

  terminal 1:
    kubectl apply -f deploy-demo.yaml

  then switch to terminal 2 to watch the rolling update take effect.
    kubectl get rs -o wide    # you can see there is one extra rs, and the legacy rs's replica count is 0.

    kubectl rollout history deployment myapp-deploy    # view the rolling-update history

# to dynamically update the configuration by patching:
  kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
    Note: this dynamically changes the parameters of the myapp-deploy Deployment controller.

  kubectl get pods    # you can see the Pods that were just created dynamically.

# dynamically modify the Deployment controller's rolling-update strategy:
  kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'

  kubectl describe deploy myapp-deploy    # view the update strategy

# modify the Pods under the Deployment controller to a new image version, and implement a canary update
  terminal 1:
    kubectl get pods -l app=myapp -w

  terminal 2:
    kubectl set image deployment myapp-deploy myapp=busybox:v3
    kubectl rollout pause deployment myapp-deploy    # pause right after the first new Pod comes up, so it can serve as the canary

  terminal 3:
    # once the canary looks healthy, resume the paused rollout:
    kubectl rollout resume deployment myapp-deploy

    kubectl get rs -o wide
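
    # While the rollout is paused you can also watch its progress (an aside of mine; the exact message text varies by version):
    kubectl rollout status deployment myapp-deploy
    # prints e.g.: Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
    # and blocks until the rollout is resumed and completes.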


# roll back to a specified revision
  kubectl rollout history deployment myapp-deploy
    deployment.extensions/myapp-deploy
    REVISION  CHANGE-CAUSE
    0         <none>
    2         <none>
    3         <none>
    4         <none>
    5         <none>

  kubectl rollout undo deployment myapp-deploy --to-revision=3

  kubectl rollout history deployment myapp-deploy    # you can see revision 3 is gone; a rollback reappears as a new, highest revision.
    deployment.extensions/myapp-deploy
    REVISION  CHANGE-CAUSE
    0         <none>
    4         <none>
    5         <none>
    6         <none>    # we rolled back to revision 3; it is shown here as revision 6.
    7         <none>    # then we tried rolling back to revision 2; it is shown here as revision 7.
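
  # An aside of mine on the <none> entries above: CHANGE-CAUSE stays empty unless the change cause is recorded,
  # e.g. with the --record flag (later deprecated) when applying:
    kubectl apply -f deploy-demo.yaml --record
    kubectl rollout history deployment myapp-deploy    # CHANGE-CAUSE now shows the recorded command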


daemonSet controller:
  # view the daemonSet controller's syntax
    kubectl explain ds
    kubectl explain ds.spec

vim  ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
   name: filebeat-ds
   namespace: default
spec:
  selector:
     matchLabels:
       app: filebeat
       release: stable
  template:
    metadata:
       labels:
         app: filebeat
         release: stable
    spec:
      containers:
      -  name: filebeat
         image: ikubernetes/filebeat:5.6.5-alpine
         env:          # usage of env: kubectl explain pods.spec.containers.env
         - name: REDIS_HOST
           value: redis.default.svc.cluster.local
         - name: REDIS_LOG_LEVEL
           value: info

  # Then use apply to apply this manifest and have the daemonSet controller create the Pod resources.
  kubectl apply -f ds-demo.yaml

# Defining multiple related resources together in one file is a common way to work.
vim ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---      # this is the document separator.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
          # filebeat finds the redis front end through this environment variable; note that the name given
          # here must be the redis Service's name, it cannot be written freely!
        - name: REDIS_LOG_LEVEL
          value: info

  # Because both of the earlier runs created the daemonSet's Pod resources, delete them first to avoid conflicts.
  kubectl delete -f ds-demo.yaml

# then re-create the redis and filebeat Pod resources together.
  kubectl apply -f ds-demo.yaml

# then create a Service for redis to expose the redis port, so filebeat can send its logs to the Service and the Service forwards them on to redis.
  kubectl expose deployment redis --port=6379
  kubectl get svc    # view the created Service

# verify whether redis receives filebeat's logs.
  kubectl get pods    # once redis is in the Running state, you can log in.
  kubectl exec -it redis-5b5d.... -- /bin/sh
  /data # netstat -tnl
  /data # nslookup redis.default.svc.cluster.local    # resolve the redis domain name.
  /data # redis-cli -h redis.default.svc.cluster.local    # verify that you can log in to redis directly through the domain name
  /data # keys *    # after logging in to redis successfully, check whether any keys have been created.

# log in to filebeat to view its status
  kubectl exec -it filebeat-ds-h776m -- /bin/sh
  / # ps aux
  / # cat /etc/filebeat/filebeat.yml    # view its configuration file, which defines the redis it reports to.
  / # printenv    # see the environment variables.
  / # kill -1 1    # -1 sends SIGHUP, signalling filebeat to reread its configuration file; this can cause the Pod to restart, but that is fine.

# In addition, with -o wide you can see that a daemonSet definition runs exactly one Pod per node.
  kubectl get pods -l app=filebeat -o wide    # the daemonSet-defined Pods do not run on the primary node,
                      # largely because the earlier deployment defined the master nodes as unschedulable.
# but the following was found during testing:
  # kubectl get node
    NAME             STATUS                        ROLES    AGE     VERSION
    192.168.111.80   Ready                         node     2d21h   v1.13.5
    192.168.111.81   Ready                         node     2d21h   v1.13.5
    192.168.111.84   Ready,SchedulingDisabled      master   2d21h   v1.13.5    # the primary nodes are configured as unschedulable, i.e. tainted.
    192.168.111.85   NotReady,SchedulingDisabled   master   2d21h   v1.13.5

  # kubectl get pod -l app=filebeat -o wide
    NAME                READY   STATUS    RESTARTS   AGE   IP              NODE             NOMINATED NODE   READINESS GATES
    filebeat-ds-8gfdr   1/1     Running   0          13m   10.10.171.2     192.168.111.81   <none>           <none>
    filebeat-ds-ml2bk   1/1     Running   1          13m   10.10.240.193   192.168.111.84   <none>           <none>    # running even on the master node!!
    filebeat-ds-zfx57   1/1     Running   0          13m   10.10.97.58     192.168.111.80   <none>           <none>
    # From the running state this matches DaemonSet behavior: one Pod on each Ready node. Why it can run on the master was
    # unclear to the author; most likely it is because DaemonSet Pods automatically tolerate the node.kubernetes.io/unschedulable
    # taint, so merely cordoning a node does not keep them off it.
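
    # If you explicitly want (or do not want) DaemonSet Pods on tainted masters, you can say so with a toleration in the
    # Pod template. A sketch, assuming the conventional master taint key (verify with: kubectl describe node <master>):
      tolerations:                             # goes under spec.template.spec of the DaemonSet
      - key: node-role.kubernetes.io/master    # tolerate the master taint so the Pod may be scheduled there
        effect: NoSchedule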



# daemonSet also supports rolling updates. How does it update?
  kubectl explain ds.spec.updateStrategy.rollingUpdate    # it only supports deleting a Pod first and then creating its replacement, because a node can run only one such Pod.

# define a rolling update for the daemonSet controller filebeat-ds: upgrade the image of the Pods under its management to filebeat:5.6.6-alpine
  kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine

# watch the Pods' update process
  kubectl get pods -w

DaemonSet's update strategy:
  updateStrategy:
    type <RollingUpdate | OnDelete>    # OnDelete: each Pod is updated only when it is deleted.
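
  # A sketch of how that looks in the manifest (my illustration; on a DaemonSet, rollingUpdate has only the
  # maxUnavailable knob, since a one-Pod-per-node controller has no room to surge):
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one node's Pod is down at a time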

# dynamically update the DaemonSet:
  kubectl get ds
  kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine

  terminal 2:
    kubectl get pods -w

  # a Pod can share namespaces with its host (a minimal sketch follows):
  Pods:
    hostNetwork <boolean>    # make the Pod share the host machine's network namespace, like Docker's host network model; once the Pod starts, accessing port 80 on the host reaches the container directly.
    hostPID <boolean>        # share the host's PID namespace.
    hostIPC <boolean>        # share the host's IPC namespace.
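
  # A minimal sketch of these fields; the Pod name and the nginx image are illustrative assumptions:
  apiVersion: v1
  kind: Pod
  metadata:
    name: host-demo
  spec:
    hostNetwork: true      # the container binds directly to the host's network; host port 80 reaches it
    hostPID: true          # the container sees the host's process table
    hostIPC: true          # the container shares the host's IPC namespace
    containers:
    - name: web
      image: nginx:alpine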


Origin: www.cnblogs.com/wn1m/p/11287837.html