Setting Up an EFK Logging Stack on Kubernetes in Six Steps



 

Step 1: First, create a namespace in which all of the logging-related resource objects will be installed.

 

# vim kube-logging.yaml

 

kind: Namespace

apiVersion: v1

metadata:

  name: kube-logging

 

# kubectl create -f kube-logging.yaml

 

If the creation succeeds, you will see the following output:

namespace/kube-logging created

 

 

To view the namespaces:

#kubectl get namespaces
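As an optional convenience (not part of the original steps), newer kubectl versions let you make kube-logging the default namespace for the current context, so later commands do not each need --namespace=kube-logging:

#kubectl config set-context --current --namespace=kube-logging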

 

 

Step 2: Next, deploy the EFK components, starting with a headless Service for Elasticsearch:

 

#vim elasticsearch_svc.yaml

kind: Service

apiVersion: v1

metadata:

  name: elasticsearch

  namespace: kube-logging

  labels:

    app: elasticsearch

spec:

  selector:

    app: elasticsearch

  clusterIP: None

  ports:

    - port: 9200

      name: rest

    - port: 9300

      name: inter-node

Note:

This defines a Service named elasticsearch with the label app=elasticsearch. When the Elasticsearch StatefulSet is associated with this Service, the Service returns DNS A records for the Elasticsearch Pods carrying the app=elasticsearch label. Setting clusterIP: None makes it a headless service. Finally, ports 9200 and 9300 are defined for interacting with the REST API and for inter-node communication, respectively.
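Because the Service is headless, it gets no cluster IP and simply tracks the Pod IPs behind it. As a quick sanity check (once the StatefulSet Pods from the next step are running), you can list its endpoints:

#kubectl get endpoints elasticsearch --namespace=kube-logging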

 

Create the Service resource object directly with kubectl:

#kubectl create -f elasticsearch_svc.yaml

service/elasticsearch created

#kubectl get services --namespace=kube-logging
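The output should look roughly like the following (the AGE column will differ); note the CLUSTER-IP of None, which confirms the headless Service:

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   26s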

 

Step 3: Create the Elasticsearch StatefulSet

#vim elasticsearch_statefulset.yaml

apiVersion: apps/v1

kind: StatefulSet

metadata:

  name: es-cluster

  namespace: kube-logging

spec:

  serviceName: elasticsearch

  replicas: 3

  selector:

    matchLabels:

      app: elasticsearch

  template:

    metadata:

      labels:

        app: elasticsearch

This defines a StatefulSet named es-cluster and sets serviceName: elasticsearch to associate it with the Service created earlier. This guarantees that each Pod in the StatefulSet is reachable at a DNS address of the form es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, where [0,1,2] is the Pod's assigned ordinal.

It then specifies 3 replicas and sets matchLabels to app: elasticsearch, so the Pod template section .spec.template.metadata.labels must also carry the app: elasticsearch label.
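To see this per-Pod DNS in action after the StatefulSet has been created, you can resolve one of the names from a throwaway Pod in the same namespace (a quick check, assuming busybox's nslookup works with your cluster DNS):

#kubectl run -it --rm dns-test --image=busybox --restart=Never --namespace=kube-logging -- nslookup es-cluster-0.elasticsearch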

Next, fill in the Pod template section; the complete manifest is:

 

 

# vim elasticsearch_statefulset.yaml

apiVersion: apps/v1

kind: StatefulSet

metadata:

  name: es-cluster

  namespace: kube-logging

spec:

  serviceName: elasticsearch

  replicas: 3

  selector:

    matchLabels:

      app: elasticsearch

  template:

    metadata:

      labels:

        app: elasticsearch

    spec:

      containers:

      - name: elasticsearch

        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3

        resources:

            limits:

              cpu: 1000m

            requests:

              cpu: 100m

        ports:

        - containerPort: 9200

          name: rest

          protocol: TCP

        - containerPort: 9300

          name: inter-node

          protocol: TCP

        volumeMounts:

        - name: data

          mountPath: /usr/share/elasticsearch/data

        env:

          - name: cluster.name

            value: k8s-logs

          - name: node.name

            valueFrom:

              fieldRef:

                fieldPath: metadata.name

          - name: discovery.zen.ping.unicast.hosts

            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"

          - name: discovery.zen.minimum_master_nodes

            value: "2"

          - name: ES_JAVA_OPTS

            value: "-Xms512m -Xmx512m"

      initContainers:

      - name: fix-permissions

        image: busybox

        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]

        securityContext:

          privileged: true

        volumeMounts:

        - name: data

          mountPath: /usr/share/elasticsearch/data

      - name: increase-vm-max-map

        image: busybox

        command: ["sysctl", "-w", "vm.max_map_count=262144"]

        securityContext:

          privileged: true

      - name: increase-fd-ulimit

        image: busybox

        command: ["sh", "-c", "ulimit -n 65536"]

        securityContext:

          privileged: true

  volumeClaimTemplates:

  - metadata:

      name: data

      labels:

        app: elasticsearch

    spec:

      accessModes: [ "ReadWriteOnce" ]

      storageClassName: do-block-storage

      resources:

        requests:

          storage: 100Gi
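Note that storageClassName: do-block-storage is specific to DigitalOcean block storage. If your cluster uses a different provisioner, list the available StorageClasses and substitute an appropriate name (or your cluster's default) before applying the manifest:

#kubectl get storageclass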

# kubectl create -f elasticsearch_statefulset.yaml

On successful creation, the output will be:

statefulset.apps/es-cluster created

 

Monitor the rollout until all three Pods are ready:

#kubectl rollout status sts/es-cluster --namespace=kube-logging

 

Output

Waiting for 3 pods to be ready...

Waiting for 2 pods to be ready...

Waiting for 1 pods to be ready...

partitioned roll out complete: 3 new pods have been updated...

Once the rollout is complete, forward local port 9200 to one of the Elasticsearch Pods so you can query the cluster:

#kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging

 

#curl http://localhost:9200/_cluster/state?pretty

Output

{

  "cluster_name" : "k8s-logs",

  "compressed_size_in_bytes" : 348,

  "cluster_uuid" : "QD06dK7CQgids-GQZooNVw",

  "version" : 3,

  "state_uuid" : "mjNIWXAzQVuxNNOQ7xR-qg",

  "master_node" : "IdM5B7cUQWqFgIHXBp0JDg",

  "blocks" : { },

  "nodes" : {

    "u7DoTpMmSCixOoictzHItA" : {

      "name" : "es-cluster-1",

      "ephemeral_id" : "ZlBflnXKRMC4RvEACHIVdg",

      "transport_address" : "10.244.8.2:9300",

      "attributes" : { }

    },

    "IdM5B7cUQWqFgIHXBp0JDg" : {

      "name" : "es-cluster-0",

      "ephemeral_id" : "JTk1FDdFQuWbSFAtBxdxAQ",

      "transport_address" : "10.244.44.3:9300",

      "attributes" : { }

    },

    "R8E7xcSUSbGbgrhAdyAKmQ" : {

      "name" : "es-cluster-2",

      "ephemeral_id" : "9wv6ke71Qqy9vk2LgJTqaA",

      "transport_address" : "10.244.40.4:9300",

      "attributes" : { }

    }

  },

...
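Beyond the cluster state, a quick way to confirm that all three nodes have joined and that the cluster is healthy (status should be green) is the health endpoint:

#curl http://localhost:9200/_cluster/health?pretty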

 

Step 4: Create the Kibana Deployment and Service

 

#vim kibana.yaml

apiVersion: v1

kind: Service

metadata:

  name: kibana

  namespace: kube-logging

  labels:

    app: kibana

spec:

  ports:

  - port: 5601

  selector:

    app: kibana

---

apiVersion: apps/v1

kind: Deployment

metadata:

  name: kibana

  namespace: kube-logging

  labels:

    app: kibana

spec:

  replicas: 1

  selector:

    matchLabels:

      app: kibana

  template:

    metadata:

      labels:

        app: kibana

    spec:

      containers:

      - name: kibana

        image: docker.elastic.co/kibana/kibana-oss:6.4.3

        resources:

          limits:

            cpu: 1000m

          requests:

            cpu: 100m

        env:

          - name: ELASTICSEARCH_URL

            value: http://elasticsearch:9200

        ports:

        - containerPort: 5601

 

Deploy Kibana:

#kubectl create -f kibana.yaml

Check the rollout status of the Kibana Deployment and list the Pods:

#kubectl rollout status deployment/kibana --namespace=kube-logging

#kubectl get pods --namespace=kube-logging

#kubectl port-forward kibana-6c9fb4b5b7-plbg2 5601:5601 --namespace=kube-logging

(Use the Kibana Pod name reported by the previous command; the random suffix will differ in your cluster.)

Test in a browser:

http://localhost:5601
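Port-forwarding is only suitable for local testing. If you want to reach Kibana without a port-forward, one option (an optional sketch, not part of the original steps) is to switch the kibana Service to NodePort and then browse to http://<node-ip>:<assigned-node-port>:

#kubectl patch svc kibana --namespace=kube-logging -p '{"spec": {"type": "NodePort"}}'

#kubectl get svc kibana --namespace=kube-logging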

 

 

Step 5: Deploy Fluentd

Fluentd is an efficient log aggregator that consumes relatively few resources.

Here it is deployed as a DaemonSet so that a Fluentd Pod runs on every node:

 

# vim fluentd.yaml

 

apiVersion: v1

kind: ServiceAccount

metadata:

  name: fluentd

  namespace: kube-logging

  labels:

    app: fluentd

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  name: fluentd

  labels:

    app: fluentd

rules:

- apiGroups:

  - ""

  resources:

  - pods

  - namespaces

  verbs:

  - get

  - list

  - watch

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: fluentd

roleRef:

  kind: ClusterRole

  name: fluentd

  apiGroup: rbac.authorization.k8s.io

subjects:

- kind: ServiceAccount

  name: fluentd

  namespace: kube-logging

---

apiVersion: apps/v1

kind: DaemonSet

metadata:

  name: fluentd

  namespace: kube-logging

  labels:

    app: fluentd

spec:

  selector:

    matchLabels:

      app: fluentd

  template:

    metadata:

      labels:

        app: fluentd

    spec:

      serviceAccount: fluentd

      serviceAccountName: fluentd

      tolerations:

      - key: node-role.kubernetes.io/master

        effect: NoSchedule

      containers:

      - name: fluentd

        image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch

        env:

          - name:  FLUENT_ELASTICSEARCH_HOST

            value: "elasticsearch.kube-logging.svc.cluster.local"

          - name:  FLUENT_ELASTICSEARCH_PORT

            value: "9200"

          - name: FLUENT_ELASTICSEARCH_SCHEME

            value: "http"

          - name: FLUENT_UID

            value: "0"

        resources:

          limits:

            memory: 512Mi

          requests:

            cpu: 100m

            memory: 200Mi

        volumeMounts:

        - name: varlog

          mountPath: /var/log

        - name: varlibdockercontainers

          mountPath: /var/lib/docker/containers

          readOnly: true

      terminationGracePeriodSeconds: 30

      volumes:

      - name: varlog

        hostPath:

          path: /var/log

      - name: varlibdockercontainers

        hostPath:

          path: /var/lib/docker/containers

 

#kubectl create -f fluentd.yaml

#kubectl get pods -n kube-logging

#kubectl get ds --namespace=kube-logging
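As an extra sanity check (not in the original write-up), you can tail the logs of the Fluentd Pods and watch for connection errors to Elasticsearch:

#kubectl logs --namespace=kube-logging -l app=fluentd --tail=20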

 

Once Fluentd has started successfully, open the Kibana dashboard and click Discover in the left-hand menu. Kibana will prompt you to define an index pattern; enter logstash-* (the index name format that the Fluentd Elasticsearch output uses by default) and complete the configuration:

 

http://localhost:5601

 

 

Step 6: Test

Create a new counter.yaml file:

#vim counter.yaml

apiVersion: v1

kind: Pod

metadata:

  name: counter

spec:

  containers:

  - name: count

    image: busybox

    args: [/bin/sh, -c,

            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

 

#kubectl create -f counter.yaml

 

This Pod simply prints log messages to stdout, so Fluentd should collect this log data and the corresponding entries should be searchable in Kibana.
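In Kibana's Discover search bar you can narrow the results to this Pod. Assuming the default kubernetes metadata filter in the fluentd-kubernetes-daemonset image, each record carries a kubernetes.pod_name field, so a query such as the following should show the counter output:

kubernetes.pod_name:counter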

Reference:

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes
