DaemonSet: a commonly used Kubernetes controller

1. Introduction

A DaemonSet ensures that a copy of a Pod runs on every Node. When a new Node is added to the cluster, the Pod is also started on it; when the DaemonSet is deleted, the Pods it created are cleaned up. DaemonSets are commonly used to deploy cluster-wide agents such as log collectors and monitoring daemons.

Common scenarios are as follows:
1. Run a cluster storage daemon on every node, such as ceph or glusterd;
2. Run a log collection daemon on every node, such as logstash or fluentd;
3. Run a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, the New Relic agent, or Ganglia gmond;
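
To make the structure concrete, here is a minimal DaemonSet manifest sketch; the node-exporter image, names, and labels are only illustrative and are not part of the examples later in this article:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter                    # illustrative name
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1 # illustrative image/tag
        ports:
        - name: metrics
          containerPort: 9100

Every node the DaemonSet can schedule onto ends up running exactly one such Pod.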

2. Scheduling strategy

Normally, which node a Pod is scheduled to is decided by the Kubernetes scheduler. For Pods created by a DaemonSet, however, the target node is already determined when the Pod is created, so the scheduler is bypassed. Therefore:

  • The DaemonSet controller does not respect a Node's unschedulable field;

  • DaemonSet Pods can be created even when the scheduler has not been started;

However, the following methods can still be used to restrict which Nodes the Pods run on:

  • nodeSelector: only schedule onto Nodes matching the specified labels;

  • nodeAffinity: a more expressive Node selector that supports, for example, set operations;

  • podAffinity: schedule onto Nodes (or topology domains) where Pods matching the given conditions are already running;

2.1 nodeSelector

First label the nodes the Pods should run on. For example, to run only on nodes with SSD disks, label those nodes:

kubectl label nodes node-01 disktype=ssd
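
To verify the label was applied, you can list the nodes that match it:

kubectl get nodes -l disktype=ssd --show-labels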

Then add a matching nodeSelector for disktype=ssd to the DaemonSet:

spec:
  nodeSelector:
    disktype: ssd
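
Note that in a DaemonSet (as opposed to a bare Pod), nodeSelector belongs in the Pod template, i.e. under spec.template.spec; a sketch of the placement:

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd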

2.2 nodeAffinity

nodeAffinity currently supports two types: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which express hard requirements and soft preferences respectively. For example, the following manifest requires scheduling onto a Node whose kubernetes.io/e2e-az-name label has the value e2e-az1 or e2e-az2, and prefers Nodes that also carry the label another-node-label-key=another-node-label-value.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0
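
The manifest above is a standalone Pod; when the same rules are used with a DaemonSet, the affinity block sits under the Pod template at spec.template.spec.affinity. A sketch, reusing the disktype=ssd label from section 2.1:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd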

2.3 podAffinity

podAffinity selects Nodes based on the labels of Pods that are already running on them, and schedules the new Pod only onto Nodes (or topology domains) that satisfy the conditions; both podAffinity and podAntiAffinity are supported. The semantics take a little getting used to. The example below expresses:

  • Required: schedule onto a Node whose zone already contains at least one running Pod labeled security=S1;

  • Preferred: avoid Nodes that are already running a Pod labeled security=S2.

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: gcr.io/google_containers/pause:2.0

3. Update

DaemonSet supports rolling updates. The relevant field is updateStrategy, which can be inspected with kubectl explain ds.spec.updateStrategy:

[root@master ~]# kubectl explain ds.spec.updateStrategy
KIND:     DaemonSet
VERSION:  extensions/v1beta1
RESOURCE: updateStrategy <Object>
DESCRIPTION:
     An update strategy to replace existing DaemonSet pods with new pods.
FIELDS:
   rollingUpdate    <Object>
     Rolling update config params. Present only if type = "RollingUpdate".
   type    <string>
     Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is
     OnDelete.

The rollingUpdate field only has maxUnavailable and no maxSurge, because a DaemonSet runs at most one Pod per node.
There are two update strategies (a configuration sketch follows this list):

  • RollingUpdate: rolling update;
  • OnDelete: Pods are only replaced after they are manually deleted. This is the default for the extensions/v1beta1 API shown above; in apps/v1 the default is RollingUpdate.
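
A sketch of a RollingUpdate configuration for a DaemonSet; with maxUnavailable: 1, Pods are replaced one node at a time during the rollout:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1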

4. Example

Define a DaemonSet for log collection: filebeat runs on every node, collects logs, and ships them to Redis:

# vim filebeat-ds.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: cachedb
  template:
    metadata:
      labels:
        app: redis
        role: cachedb
    spec:
      containers:
      - name: redis
        image: redis:5.0.5-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: redis
    role: cachedb
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      role: logstorage
  template:
    metadata:
      labels:
        app: filebeat
        role: logstorage
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local

Create the resources from this file:

# kubectl apply -f filebeat-ds.yaml

Then check the status of the Service and Pods:

[root@master daemonset]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1      <none>        443/TCP          4d6h
redis           ClusterIP   10.68.213.73   <none>        6379/TCP         5s
[root@master daemonset]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
filebeat-ds-pgmzt        1/1     Running   0          2m50s
filebeat-ds-wx44z        1/1     Running   0          2m50s
filebeat-ds-zjv68        1/1     Running   0          2m50s
redis-85c7ccb675-ks4rc   1/1     Running   0          4m2s
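
You can also check the DaemonSet itself to confirm that the desired and current Pod counts match the number of schedulable nodes:

# kubectl get ds filebeat-ds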

Then exec into one of the filebeat containers and write some test data:

# kubectl exec -it filebeat-ds-pgmzt -- /bin/bash
# cd /var/log/containers/
# echo "123" > a.log

Then exec into the Redis container and check the data:

# kubectl exec -it redis-85c7ccb675-ks4rc -- /bin/sh
/data # redis-cli -h redis.default.svc.cluster.local -p 6379
redis.default.svc.cluster.local:6379> KEYS *
1) "filebeat"
redis.default.svc.cluster.local:6379>

As shown above, Redis has received the data shipped by filebeat.
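
Assuming filebeat's default Redis output settings, the events are pushed as JSON strings onto a Redis list named filebeat (the key listed above), so you can peek at an individual event from the same redis-cli session (commands only; the actual output depends on your log content):

redis.default.svc.cluster.local:6379> TYPE filebeat
redis.default.svc.cluster.local:6379> LRANGE filebeat 0 0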
