Common Kubernetes Controllers: DaemonSet

1. Introduction

A DaemonSet ensures that a copy of a Pod runs on every Node. When a new Node is added to the cluster, the Pod is automatically started on it; when the DaemonSet is deleted, all Pods it created are cleaned up. DaemonSets are commonly used to deploy cluster-wide applications such as log collection and monitoring agents.

Typical use cases include:
1. Running a storage daemon on every node, such as ceph or glusterd;
2. Running a log-collection daemon on every node, such as logstash or fluentd;
3. Running a monitoring daemon on every node, such as Prometheus Node Exporter, collectd, the New Relic agent, or Ganglia gmond;
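For a concrete picture, a minimal DaemonSet manifest for the log-collection case might look like the sketch below (the name, labels, and image tag here are illustrative assumptions, not taken from this article):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-ds          # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.14   # assumed image tag
```

Unlike a Deployment, there is no replicas field: the number of Pods is determined by the number of eligible Nodes.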

2. Scheduling

Normally, the node a Pod is scheduled to is decided by the Kubernetes scheduler. For a Pod created by a DaemonSet, however, the target node is already determined at creation time, so the DaemonSet controller bypasses the scheduler. As a result:

  • the DaemonSet controller ignores the unschedulable field of a Node;

  • DaemonSet Pods can be created even when the scheduler is not running.

(Note: since Kubernetes v1.12, DaemonSet Pods are in fact scheduled by the default scheduler via node affinity rather than by the DaemonSet controller itself.)

You can still direct the Pods to run on specific Nodes using:

  • nodeSelector: schedule only onto Nodes whose labels match;

  • nodeAffinity: a more expressive Node selector that supports set operations;

  • podAffinity: schedule onto Nodes that are already running Pods matching certain conditions;

2.1 nodeSelector

Label the Nodes you want the Pods to run on. For example, to run only on nodes with SSD disks, first label those nodes:

kubectl label nodes node-01 disktype=ssd

Then set nodeSelector to disktype=ssd in the DaemonSet's Pod template:

spec:
  nodeSelector:
    disktype: ssd
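Putting the fragment in context, nodeSelector sits under the Pod template's spec; a sketch of a full DaemonSet (the name and labels are illustrative assumptions, the pause image is the one used elsewhere in this article):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor          # illustrative name
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:          # Pods run only on nodes labeled disktype=ssd
        disktype: ssd
      containers:
      - name: main
        image: gcr.io/google_containers/pause:2.0
```

Nodes without the disktype=ssd label simply get no Pod from this DaemonSet.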

2.2 nodeAffinity

nodeAffinity currently supports two types: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, expressing a hard requirement and a soft preference, respectively. The example below schedules onto Nodes carrying the label kubernetes.io/e2e-az-name with value e2e-az1 or e2e-az2, and among those prefers Nodes that also carry the label another-node-label-key=another-node-label-value.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0

2.3 podAffinity

podAffinity selects Nodes based on the labels of Pods already running there, scheduling only onto Nodes that host matching Pods; both podAffinity and podAntiAffinity are supported. The semantics take some getting used to. In the example below:

  • the Pod may be scheduled onto a Node if that Node's Zone contains at least one running Pod labeled security=S1;

  • the Pod should preferably not be scheduled onto a Node that is running at least one Pod labeled security=S2.
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: gcr.io/google_containers/pause:2.0

3. Updates

DaemonSet supports rolling updates through the updateStrategy field, which you can inspect with kubectl explain ds.spec.updateStrategy:

[root@master ~]# kubectl explain ds.spec.updateStrategy
KIND:     DaemonSet
VERSION:  extensions/v1beta1

RESOURCE: updateStrategy <Object>

DESCRIPTION:
     An update strategy to replace existing DaemonSet pods with new pods.

FIELDS:
   rollingUpdate    <Object>
     Rolling update config params. Present only if type = "RollingUpdate".

   type    <string>
     Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is
     OnDelete.

The rollingUpdate field has only maxUnavailable and no maxSurge, because a DaemonSet runs at most one Pod per node. DaemonSet therefore offers two update strategies:

  • RollingUpdate: replace Pods gradually, node by node;
  • OnDelete: replace a Pod only after it has been deleted manually. This was the default in extensions/v1beta1 (shown above); in apps/v1 the default is RollingUpdate.
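For example, to roll Pods out one node at a time, the strategy can be set like this fragment (a sketch; maxUnavailable: 1 is also the default value):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one node's Pod is unavailable during the update
```

The controller deletes the old Pod on a node and starts the new one before moving on to the next node.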

4. Example

Let's define a log-collection DaemonSet that uses filebeat to gather logs and ship them to redis:

# vim filebeat-ds.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: cachedb
  template:
    metadata:
      labels:
        app: redis
        role: cachedb
    spec:
      containers:
      - name: redis
        image: redis:5.0.5-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: redis
    role: cachedb
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      role: logstorage
  template:
    metadata:
      labels:
        app: filebeat
        role: logstorage
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local

Apply the manifest:

# kubectl apply -f filebeat-ds.yaml

Then check the status of the Service and Pods:

[root@master daemonset]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1      <none>        443/TCP          4d6h
redis           ClusterIP   10.68.213.73   <none>        6379/TCP         5s
[root@master daemonset]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
filebeat-ds-pgmzt        1/1     Running   0          2m50s
filebeat-ds-wx44z        1/1     Running   0          2m50s
filebeat-ds-zjv68        1/1     Running   0          2m50s
redis-85c7ccb675-ks4rc   1/1     Running   0          4m2s

Next, exec into a filebeat container and create some test data:

# kubectl exec -it filebeat-ds-pgmzt -- /bin/bash
# cd /var/log/containers/
# echo "123" > a.log

Then check the data from inside the redis container:

# kubectl exec -it redis-85c7ccb675-ks4rc -- /bin/sh
/data # redis-cli -h redis.default.svc.cluster.local -p 6379
redis.default.svc.cluster.local:6379> KEYS *
1) "filebeat"
redis.default.svc.cluster.local:6379>

As shown above, Redis is receiving the data shipped by filebeat.

Reposted from blog.51cto.com/15080014/2654563