Cloud-native log architecture in practice: the past, present, and future of NetEase Shufan's open-source Loggie

Introduction: NetEase has been exploring and practicing cloud native since 2015. As an important part of observability, the log platform has evolved from host-based to container-based deployments, supporting the large-scale cloud-native transformation of business units across the group. This article describes the problems we encountered along the way, how the architecture evolved, and the experience and best practices we have accumulated.

The main contents include:

  • Operator-based log collection
  • Dilemmas and challenges in large-scale scenarios
  • The Present and Future of Open Source Loggie

01 Initial exploration of cloud native: Operator-based log collection

In the early days, when the company's internal businesses ran on physical machines, each business chose its own log collection tools, log transfer and storage, and configuration distribution, resulting in a confusing mix of options. Based on the Qingzhou platform, the company has been driving the containerization and cloud-native transformation of its services. As services continued to migrate to K8s, we accumulated a great deal of practical experience in collecting K8s container logs and building log platforms.

What does cloud-native logging look like, and how does container log collection differ from collecting service logs on a host? First, Pods migrate frequently in K8s, so manually maintaining log collection configuration on each node is impractical. At the same time, logs in a cloud-native environment are stored in various forms, including container standard output, HostPath, EmptyDir, PV, and so on. In addition, once collected, logs are usually retrieved and filtered by cloud-native dimensions such as Namespace, Pod, Container, Node, and even container environment variables and Labels, so this meta-information needs to be injected during collection.

There are three common approaches to log collection in cloud-native environments:

1. Only capture standard output

Although printing logs to standard output conforms to the cloud-native twelve-factor methodology, many businesses are still used to writing logs to files, and collecting only standard output cannot meet the needs of more complex businesses. The conventional way to collect standard output logs is to mount the directories under /var/lib/docker and /var/log/pods, and to check whether the chosen log agent supports injecting K8s meta-information.
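To make the meta-information point concrete, here is a minimal sketch (not tied to any particular agent) of recovering Kubernetes metadata from the stdout log path layout. It assumes the kubelet layout /var/log/pods/<namespace>_<pod>_<uid>/<container>/*.log used by recent Kubernetes versions, and the function names are ours:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// PodMeta holds the Kubernetes metadata an agent would inject into each log event.
type PodMeta struct {
	Namespace, Pod, UID, Container string
}

// metaFromStdoutPath derives pod metadata from a kubelet stdout log path such as
// /var/log/pods/<namespace>_<pod>_<uid>/<container>/0.log.
func metaFromStdoutPath(logPath string) (PodMeta, error) {
	rel, err := filepath.Rel("/var/log/pods", logPath)
	if err != nil {
		return PodMeta{}, err
	}
	parts := strings.Split(filepath.ToSlash(rel), "/")
	if len(parts) < 3 {
		return PodMeta{}, fmt.Errorf("unexpected log path layout: %s", logPath)
	}
	podDir := strings.SplitN(parts[0], "_", 3) // namespace_pod_uid
	if len(podDir) != 3 {
		return PodMeta{}, fmt.Errorf("unexpected pod directory: %s", parts[0])
	}
	return PodMeta{Namespace: podDir[0], Pod: podDir[1], UID: podDir[2], Container: parts[1]}, nil
}

func main() {
	meta, err := metaFromStdoutPath("/var/log/pods/default_nginx-6799fc88d8-abcde_0f1e2d3c/nginx/0.log")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", meta)
}
```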

2. Globally mount log paths into the Agent

Logs can be stored in EmptyDir, HostPath, or PV volumes, and the log collection agent can be deployed to every K8s node as a DaemonSet. The node directories that EmptyDir and other volumes map to can be configured in the Agent with wildcard paths, and the Agent reads the log files under the configured paths.

However, this blanket matching cannot be configured per service. If only some services' logs should be collected, or one application's log format differs from the others (for example, its logs span multiple lines), the configuration has to be modified separately, which makes management relatively expensive.

3. Sidecar method

An Agent sidecar is deployed with each business Pod to collect its logs; the agent can be injected into the Pod by the container platform or a K8s webhook. The Agent collects the service container's logs by mounting the same HostPath or EmptyDir volume.

The disadvantage of this approach is that it is intrusive to the Pod, and resource consumption becomes significant when the number of Pods is large.

 

Comparing the two deployment modes, DaemonSet has advantages in stability, intrusiveness, and resource usage, while the sidecar mode is better in isolation and performance. In practice we collect logs with DaemonSet first; if a service has a large log volume and high throughput, we consider switching that service's Pods to sidecar collection.

To address these log collection pain points, our first solution was: the self-developed Ripple Operator working with the open-source log collection client Filebeat to collect business logs.

In terms of deployment, Filebeat and Ripple are deployed together, with Ripple as a sidecar, and Ripple watches K8s. In terms of usage, users define CRDs; one CRD represents the log collection configuration of one service. Ripple interacts with K8s to perceive the Pod lifecycle, delivers configuration to the Agent, and automatically adds Pod-related meta-information and injects it into the logs. In terms of storage, HostPath, EmptyDir, PV, and Docker Rootfs are all supported.
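Ripple itself is an internal component, but the general pattern it follows — watching Pods with a client-go informer and regenerating the per-service collection configuration on Pod lifecycle events — can be sketched roughly as below; renderConfig and the overall structure are illustrative assumptions, not Ripple's actual code:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // the operator runs inside the cluster
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Watch Pods and re-render the collection configuration on every change.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { renderConfig(obj.(*corev1.Pod)) },
		UpdateFunc: func(_, obj interface{}) { renderConfig(obj.(*corev1.Pod)) },
		DeleteFunc: func(obj interface{}) { /* drop the pod's collection config */ },
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep running
}

// renderConfig is a hypothetical stand-in for "write out an agent input with the
// pod's log paths plus namespace/pod/container metadata as added fields".
func renderConfig(pod *corev1.Pod) {
	fmt.Printf("render collection config for %s/%s\n", pod.Namespace, pod.Name)
}
```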

The log collection architecture in the figure was first applied in NetEase Yanxuan and then gradually adopted by most departments in the company, with variations depending on the actual environment. First, the log platform issues the configuration. After Ripple detects a change to the K8s CRD configuration, it sends the configuration to Filebeat. Filebeat sends the collected logs to the transit machines, which forward them to Kafka; Flink consumes the Kafka data, processes it, and forwards it to the backend log storage.

02 Into deep water: difficulties and challenges at large scale

As more and more businesses were onboarded, the version 1.0 log architecture gradually exposed problems: system stability issues at very large scale, log platform performance issues, increasingly difficult and frequent troubleshooting, and steadily rising maintenance labor costs.

In the architecture described above, Filebeat sends each service's logs to the transit machines. However, Filebeat has a single-queue design, so a blocked transit machine affects log reporting for the entire node's Filebeat. We tried to refactor Filebeat into a multi-queue mode to improve isolation, but due to the constraints of Filebeat's own architecture, the result and runtime behavior were not ideal, and maintaining and upgrading against the open-source version was very difficult.

Filebeat also supports only a single output. When users need to send logs to multiple Kafka clusters, the only option is to deploy multiple Filebeat instances on the same node, which leads to high operation and maintenance costs, heavy resource consumption, and difficult horizontal scaling.

Filebeat was originally designed to address Logstash's heaviness: it runs as a lightweight log collection endpoint and is not suitable for use as a transit layer. Logstash performs well at log splitting and parsing, but its throughput is weak. Flink has excellent performance and functionality, but many of our scenarios only require simple log splitting, and Flink generally depends on a stream-processing platform; during delivery, users also have to pay extra for Flink's machines and operation and maintenance. We therefore needed a lighter-weight, lower-cost solution.

In addition, the observability and stability of the log collection process matter. Users often run into problems such as logs not being collected as expected, delayed collection, lost logs, and log volume growing too fast. Filebeat does not provide monitoring metrics for these, so an extra Prometheus exporter has to be deployed.

Take an online issue as an example. One online cluster frequently saw disk usage surge above 90% within a short period and trigger alarms. After logging in to the node to investigate, we found the cause was that the downstream Kafka throughput was insufficient and some log files had not finished being collected. By default the Filebeat process keeps the file handle until the file has been fully collected, and since the handle is not released, the underlying file cannot actually be deleted. In such cases Filebeat lacks metrics such as input latency, send latency, and backlog, which hurts stability.

On a high-traffic node, we found the overall processing speed unsatisfactory when Filebeat sent data to the downstream Kafka, so we tried many optimizations, such as switching to an SSD-backed Kafka cluster, changing Kafka configuration, and tuning Filebeat configuration. We found that without data compression, Filebeat's send rate is hard to push past 80MB/s, while enabling compression sharply increases Filebeat's CPU consumption. Adjusting Kafka or Filebeat parameters, machines, and other configuration did not yield obvious improvements.

So, apart from Filebeat, could other log collection tools meet the requirements? The current mainstream log collection agents are not ideal for containerized scenarios: most open-source tools only support collecting container standard output and offer no complete solution for cloud-native log collection.

03 A new journey: the present and future of open-source Loggie

Given the state of Filebeat and the other mainstream data collection tools, we decided to develop our own log collection agent, Loggie. It is a lightweight, high-performance, cloud-native log collection and aggregation agent written in Golang that supports multiple pipelines and hot-pluggable components. The overall architecture is shown in the figure.

We have strengthened the agent's monitoring and stability: the Monitor EventBus reports the agent's running status, and the agent exposes a callable RESTful API and an endpoint from which Prometheus can pull monitoring data. For configuration management, K8s is currently used to distribute and manage configuration, and more configuration centers will be supported in the future.

Loggie provides a one-stack log solution: the new architecture no longer maintains the agent and the transit machine separately. Built from pluggable components, the Agent can serve both as a log collection endpoint and as a transit machine, and supports log transfer, filtering, parsing, splitting, and log alerting. Loggie also provides a complete solution for cloud-native log collection, such as using K8s capabilities to deliver configuration and deployment via CRDs. Drawing on our long experience operating large-scale log collection services, Loggie has built-in all-round observability, fast troubleshooting, anomaly alerting, and automated operation and maintenance capabilities. Being written in Golang, Loggie performs very well: it has a small resource footprint, excellent throughput, high development efficiency, and low maintenance cost.

1. One-stack logging solution

The Agent can serve both as a log collection client and as a transit machine. For log transfer, the Interceptor module implements log distribution, aggregation, dumping, parsing, and splitting. The log collection configuration can be delivered through K8s CRDs, and the overall maintenance cost of the new architecture is lower.

Before adopting Loggie, we had two alerting solutions. The first was based on ElastAlert polling Elasticsearch for log alerts; it required manual configuration delivery, could not automate alert onboarding, and had imperfect high availability. The second was based on Flink matching keywords or regular expressions against logs; as mentioned above, Flink is relatively heavyweight for the overall log collection service.

With Loggie, we can use the logAlert Interceptor to detect keywords (such as info or error) by matching during the log collection process. We can also poll Elasticsearch by separately deploying a Loggie Aggregator with an Elasticsearch source and send the results to Prometheus for alerting. The new log alerting architecture is lighter and easier to maintain.
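The authoritative logAlert configuration is in the Loggie documentation; as a hedged illustration of the underlying idea (matching each log line against alert keywords during collection instead of polling the backend), a minimal in-process version might look like the following, where Event and keywordAlerter are our own illustrative names rather than Loggie's API:

```go
package main

import (
	"fmt"
	"regexp"
)

// Event is a simplified log event flowing through a collection pipeline.
type Event struct {
	Body   string
	Fields map[string]string
}

// keywordAlerter checks each event against a pattern (e.g. "ERROR|Exception")
// and hands matches to an alerting callback while letting the event continue on.
type keywordAlerter struct {
	pattern *regexp.Regexp
	alert   func(Event)
}

func (k *keywordAlerter) Intercept(e Event) Event {
	if k.pattern.MatchString(e.Body) {
		k.alert(e)
	}
	return e
}

func main() {
	alerter := &keywordAlerter{
		pattern: regexp.MustCompile(`ERROR|Exception`),
		alert: func(e Event) {
			// In a real pipeline this would push to an alert channel, webhook, or Prometheus.
			fmt.Printf("alert: %s (pod=%s)\n", e.Body, e.Fields["pod"])
		},
	}
	for _, line := range []string{"INFO started", "ERROR connect refused"} {
		alerter.Intercept(Event{Body: line, Fields: map[string]string{"pod": "demo-0"}})
	}
}
```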

In terms of project design, we adhere to microkernel, plug-in, and componentized principles: everything is abstracted as a component. By implementing the lifecycle interface, developers can quickly write data sending logic, data source reading logic, processing logic, and service registration logic. This design lets requirements be implemented faster and more flexibly, greatly improving development efficiency on the log collection side.
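A minimal sketch of what such a component lifecycle contract and registry could look like (the interface and method names below are assumptions for illustration, not Loggie's actual definitions):

```go
package main

import "fmt"

// Component is a hypothetical lifecycle contract: sources, sinks, and
// interceptors would all implement it and differ only in their data logic.
type Component interface {
	Init(config map[string]interface{}) error
	Start() error
	Stop()
}

// registry maps a component type name to a factory, so pipelines can be
// assembled from configuration by name ("file", "kafka", "logAlert", ...).
var registry = map[string]func() Component{}

func Register(name string, factory func() Component) { registry[name] = factory }

// fileSource is a stub source registering itself into the plugin registry.
type fileSource struct{ paths []string }

func (f *fileSource) Init(cfg map[string]interface{}) error {
	if p, ok := cfg["paths"].([]string); ok {
		f.paths = p
	}
	return nil
}
func (f *fileSource) Start() error { fmt.Println("collecting:", f.paths); return nil }
func (f *fileSource) Stop()        {}

func main() {
	Register("file", func() Component { return &fileSource{} })

	c := registry["file"]()
	_ = c.Init(map[string]interface{}{"paths": []string{"/var/log/app/*.log"}})
	_ = c.Start()
	defer c.Stop()
}
```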

Thanks to this flexible and efficient data source design, we can configure a kubeEvent source to collect K8s Event data as a supplement to monitoring and alerting. Compared with building a separate K8s Event collection project, we only need to implement the corresponding source in Loggie; logic such as queue buffering, log parsing, and log sending can all be reused.
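As a rough sketch of the kind of logic a kubeEvent-style source wraps — watching Kubernetes Events with client-go and serializing each one so it can flow through the normal pipeline — the following is illustrative only; the field mapping in toLogEvent is our own choice:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// toLogEvent serializes a Kubernetes Event so it can flow through the same
// queue/parse/send path as ordinary log lines.
func toLogEvent(ev *corev1.Event) string {
	b, _ := json.Marshal(map[string]string{
		"namespace": ev.Namespace,
		"reason":    ev.Reason,
		"object":    ev.InvolvedObject.Kind + "/" + ev.InvolvedObject.Name,
		"message":   ev.Message,
		"type":      ev.Type,
	})
	return string(b)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Watch Events in all namespaces and emit each one as a structured log line.
	w, err := clientset.CoreV1().Events("").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for item := range w.ResultChan() {
		if ev, ok := item.Object.(*corev1.Event); ok {
			// A real source would push this onto the pipeline queue instead of printing.
			fmt.Println(toLogEvent(ev))
		}
	}
}
```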

2. Cloud-native log collection

 

By configuring the LogConfig CRD, users can specify that Pod, node, or cluster logs be collected. The example in the figure matches the Pods to collect via a labelSelector and configures the log paths inside the container; the Sink CRD and Interceptor CRD configure how logs are sent, processed, and parsed.

Opening up the log processing link through CRDs lets users define log transfer, aggregation, and sending schemes in configuration, as well as log parsing, rate limiting, and alerting rules, which greatly reduces the complexity of configuration management for both private deployments and the log platform. Whenever a configuration changes, K8s quickly synchronizes it to the log collection service.
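For orientation, here is a rough Go sketch of how a LogConfig-style spec might be modeled as CRD types in an operator. The field names and API group are approximations chosen for illustration; the authoritative schema lives in the Loggie repository:

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// LogConfigSpec is an illustrative approximation of a LogConfig-style spec: which
// Pods to collect from, and which pipeline (sources, sink, interceptors) to apply.
type LogConfigSpec struct {
	Selector PodSelector `json:"selector"`
	Pipeline Pipeline    `json:"pipeline"`
}

type PodSelector struct {
	Type          string            `json:"type"` // e.g. "pod"
	LabelSelector map[string]string `json:"labelSelector,omitempty"`
}

type Pipeline struct {
	Sources        string `json:"sources,omitempty"`        // file paths inside the container, multiline rules, ...
	SinkRef        string `json:"sinkRef,omitempty"`        // reference to a separately defined Sink CRD
	InterceptorRef string `json:"interceptorRef,omitempty"` // reference to a separately defined Interceptor CRD
}

type LogConfig struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              LogConfigSpec `json:"spec"`
}

func main() {
	lc := LogConfig{
		TypeMeta:   metav1.TypeMeta{APIVersion: "example.io/v1beta1", Kind: "LogConfig"},
		ObjectMeta: metav1.ObjectMeta{Name: "tomcat-access", Namespace: "default"},
		Spec: LogConfigSpec{
			Selector: PodSelector{Type: "pod", LabelSelector: map[string]string{"app": "tomcat"}},
			Pipeline: Pipeline{
				Sources:        `[{"type":"file","paths":["stdout","/usr/local/tomcat/logs/*.log"]}]`,
				SinkRef:        "default-kafka",
				InterceptorRef: "default",
			},
		},
	}
	out, _ := json.MarshalIndent(lc, "", "  ")
	fmt.Println(string(out))
}
```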

In K8s we provide a variety of deployment options. As an agent, Loggie can be deployed as a DaemonSet or a sidecar; as a transit node, it can be deployed as a StatefulSet. The log collection architecture becomes more flexible: logs can be sent directly to ES, or sent to Kafka and then handed to the transit nodes, and deployment is straightforward.

3. Production-grade features

We added comprehensive metrics to the log collection agent, including log collection progress, files that have not finished collecting for a long time, collection and delivery latency, file descriptor count, output QPS, and other service-level indicators, and we expose them via a native Prometheus-format endpoint, a REST API, and by sending metrics to Kafka.
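As a hedged sketch of the general pattern rather than Loggie's internals, the following registers a couple of collection-progress metrics with the standard Prometheus Go client and exposes them over HTTP; the metric names and port are made up for the example:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Bytes read per source, as a stand-in for "collection progress".
	collectedBytes = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "agent_source_collected_bytes_total", Help: "Bytes collected per source."},
		[]string{"pipeline", "source"},
	)
	// End-to-end send delay, as a stand-in for "collection and delivery latency".
	sendDelay = prometheus.NewHistogram(
		prometheus.HistogramOpts{Name: "agent_sink_send_delay_seconds", Help: "Delay between read and successful send."},
	)
)

func main() {
	prometheus.MustRegister(collectedBytes, sendDelay)

	// The collection loop would update these as it works; simulated here.
	collectedBytes.WithLabelValues("nginx-access", "file").Add(4096)
	sendDelay.Observe(0.12)

	// Expose metrics in native Prometheus format for scraping.
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":9196", nil)
}
```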

The stability and troubleshooting of large-scale log collection deployments have improved greatly. We strengthen service isolation with independent per-task collection Pipelines, provide QPS limiting through an Interceptor, clean up logs periodically to prevent disks from filling up, and add detection of sudden log file growth and a reasonable file-handle retention mechanism.
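For the QPS limit specifically, the underlying technique can be sketched with the standard golang.org/x/time/rate token bucket; this is only an illustration of the idea, not Loggie's implementation:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// rateLimitInterceptor drops (or could block and apply backpressure to) events
// beyond maxQPS, isolating a noisy pipeline from the rest of the agent.
type rateLimitInterceptor struct {
	limiter *rate.Limiter
}

func newRateLimitInterceptor(maxQPS int) *rateLimitInterceptor {
	return &rateLimitInterceptor{limiter: rate.NewLimiter(rate.Limit(maxQPS), maxQPS)}
}

// Allow reports whether an event may pass right now.
func (r *rateLimitInterceptor) Allow() bool { return r.limiter.Allow() }

func main() {
	rl := newRateLimitInterceptor(2) // at most ~2 events per second
	passed := 0
	for i := 0; i < 10; i++ {
		if rl.Allow() {
			passed++
		}
	}
	fmt.Printf("passed %d of 10 events in one burst\n", passed)
}
```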

4. Resources and Performance

 

We benchmarked against Filebeat, sending logs to Kafka in a scenario with single-line processing, a single file, the same sending concurrency, and no parsing.

According to the test results, Loggie's performance is greatly improved: CPU consumption is 1/4 of Filebeat's, throughput is 1.6 to 2.6 times Filebeat's, and the maximum throughput rises from Filebeat's 80MB/s bottleneck to more than 200MB/s.

04 Loggie Open Source Project

Loggie is now open source. Below is the roadmap the project is working toward and what we are currently doing; interested developers are welcome to join the project.

Loggie project address: https://github.com/loggie-io/loggie/

Guest speaker: Fu Yi, head and architect of NetEase Shufan Qingzhou Logging Platform

He currently focuses on the research and development of the NetEase Shufan Qingzhou cloud-native log platform and is dedicated to cloud-native technology, its ecosystem, and its commercialization. He has in-depth knowledge of Kubernetes, Serverless, and observability, and rich experience in cloud-native distributed architecture design, development, and project practice.

Edited and compiled by: Zhang Detong, Treelab

First published on: DataFunTalk

Original title: NetEase Shufan Cloud Native Log Platform Architecture Practice
