Kubernetes 1.2 new feature introduction: DaemonSet

http://www.dockerinfo.NET/1139.html

 

If you are using Kubernetes to build your production environment and have been looking for a way to run a daemon (Pod) on every compute node, congratulations: DaemonSet has the answer for you!

What is DaemonSet


A DaemonSet ensures that all (or some specified) Nodes run a copy of the same Pod. When a node joins the Kubernetes cluster, the DaemonSet schedules the Pod onto it; when a node is removed from the cluster, the Pods the DaemonSet placed on it are removed as well. Deleting a DaemonSet deletes all of the Pods it created.
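As a concrete illustration, here is a minimal sketch of a DaemonSet manifest (in Kubernetes 1.2, DaemonSet is a beta resource in the extensions/v1beta1 API group); the Prometheus node-exporter image is just one example workload:

```yaml
# node-exporter-daemonset.yaml -- run one monitoring Pod on every Node
apiVersion: extensions/v1beta1   # DaemonSet is beta in Kubernetes 1.2
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  template:                      # an ordinary Pod template
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true          # publish metrics on the node's own IP
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
```

Create it with `kubectl create -f node-exporter-daemonset.yaml`; afterwards `kubectl get pods -o wide` should show exactly one of these Pods on every Node.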

When running applications on Kubernetes, we often need to run the same daemon process (Pod) on all Nodes, or on all Nodes within a zone, for example in the following scenarios:

  • Run a distributed storage daemon on each Node, such as glusterd or ceph
  • Run a log collector on each Node, such as fluentd or logstash
  • Run a monitoring agent on each Node, such as Prometheus Node Exporter, collectd, etc.

In the simple case, a single DaemonSet covers all Nodes and realizes Only-One-Pod-Per-Node; in other cases, we label different compute nodes so that the Kubernetes Nodes in the cluster are divided into multiple zones, and a DaemonSet can then realize Only-One-Pod-Per-Node within each zone, as the sketch below shows.
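A sketch of the zone case, assuming we mark Nodes with a zone label of our own choosing (the label key/value and the image are illustrative). First label the Nodes, then restrict the DaemonSet with a nodeSelector in the Pod template:

```
kubectl label nodes node-1 node-2 zone=zone-a
```

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: zone-a-daemon
spec:
  template:
    metadata:
      labels:
        app: zone-a-daemon
    spec:
      nodeSelector:                 # only Nodes labeled zone=zone-a run this Pod
        zone: zone-a
      containers:
      - name: daemon
        image: your-daemon-image    # placeholder; substitute the real daemon
```

The DaemonSet then maintains exactly one Pod on every Node that matches the selector and nothing anywhere else.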

How to use DaemonSet


Before explaining how to use DaemonSet in detail, let's consider a scenario like this:

  • The Java applications have been containerized and run in the Kubernetes cluster
  • The logs of the Java applications must be collected and analyzed in real time, with monitoring and alerting for the business
  • Log collection is implemented with fluentd + kafka + elasticsearch + kibana
  • The EFK stack is itself containerized and runs in Kubernetes

How do we collect the Java logs in this specific scenario? Suppose for a moment that DaemonSet did not exist; we would then have at least the following options:

  1. Build the fluentd collector into the docker image of every Java application, so that it starts together with the application
  2. Run the fluentd collector and the Java application as two containers in one Pod, so that fluentd starts when the Pod is created (see the sidecar sketch after this list)
  3. Run fluentd on each compute node (Node), have the Java applications write their logs to stdout, and let fluentd collect the logs of all applications according to configured rules
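For reference, here is a minimal sketch of option 2, the sidecar pattern; the image names and log path are illustrative, not from the article. The two containers share an emptyDir volume so fluentd can read what the application writes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app-with-fluentd
spec:
  containers:
  - name: java-app
    image: my-registry/java-app:1.0   # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app         # the app writes its log files here
  - name: fluentd
    image: fluent/fluentd             # still needs per-application rules
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}                      # shared volume, same lifetime as the Pod
```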

Comparing the three options: the first is cumbersome, couples the application and fluentd far too tightly, and any change to the fluentd configuration forces a rebuild of the image; the second is complicated, because different application logs need different fluentd collection rules, and the coupling remains; the third is the simplest and most elegant, but how do we get fluentd scheduled onto every compute node? Exactly: with a DaemonSet.
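What option 3 looks like as a DaemonSet, sketched with a stock fluentd image (a real deployment would use an image configured with the kafka / elasticsearch output plugins); the hostPath volumes give fluentd access to each node's container logs:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd               # swap in an image with your plugins
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                    # node-level log files
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers  # stdout/stderr of every container
```

Because it is a DaemonSet, every Node that joins the cluster automatically gets its own fluentd Pod, and a Node that leaves takes its Pod with it; no per-application wiring is required.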

 

http://blog.csdn.net/liukuan73/article/details/54710597
