Test development engineers: how can you quickly improve your work efficiency? Come and try this technology!

As Internet technology develops, test development engineers are expected to both guarantee quality and improve efficiency, and efficiency gains show up in every aspect of the work. A test development engineer therefore needs solid basic development skills and a certain level of coding ability; these are a strong guarantee for the project.


When functional testing turns up a bug, the test development engineer needs to debug the code, trace the problem back to its source, and keep an eye on code quality along the way. The most important basis for this "tracing to the source" step is the system's output log, which is also the first place developers look when locating a problem. To make this part of the work more efficient, I want to build a log collection, storage, and display toolchain on ELK to address two current problems: inspecting logs is slow, and there is no visual interface.

1. What is ELK

ELK is an acronym formed from the initials of three tools: Elasticsearch, Logstash, and Kibana. Here is a brief introduction to each.

(1) Logstash

Logstash is a data processing pipeline that collects data from many different sources, transforms it, and ships it to wherever it is needed. It supports almost any type of log.

(2) Elasticsearch

Elasticsearch is a distributed, RESTful search and analytics engine built on Lucene; it stores and retrieves data in near real time and scales well.
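As a quick illustration, once log documents have been indexed, a full-text search is a single REST call; the index name app-logs-* and the query term below are hypothetical:

curl -X GET "http://localhost:9200/app-logs-*/_search?q=message:timeout&pretty"

This returns, in near real time, every indexed log line whose message field mentions timeout.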

(3) Kibana

Kibana is an open-source analysis and visualization platform that provides the graphical interface for displaying log information. It can analyze and visualize the log data stored in Elasticsearch indices.

2. How to build an ELK pipeline

The overall architecture of a log management platform built with the ELK tools is shown in Figure 1. Logstash collects, parses, and filters log information and sends the preprocessed, filtered data on to Elasticsearch; Elasticsearch stores the logs and provides the analysis and full-text search functions; Kibana provides the display interface for log visualization, presenting the log data stored in Elasticsearch for users to query and analyze.

Figure 1 Log management platform architecture diagram

Log collection and preprocessing in Logstash are relatively complicated and require a manually written configuration file. Logstash consists of three parts: the input plug-in (input), the filter plug-in (filter), and the output plug-in (output). The input plug-in collects logs from each module, the filter plug-in preprocesses the collected logs, and the output plug-in writes the logs to their target. The detailed flow of Logstash log processing is shown in Figure 2. Before Logstash is started, a configuration file (.conf) needs to be written; it contains the three plug-in sections and their respective settings.

Figure 2 Logstash log processing flow diagram
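A minimal sketch of such a configuration file, assuming the system under test writes its logs to /var/log/app/*.log and Elasticsearch runs locally on port 9200 (both the path and the index name are hypothetical):

input {
  file {
    path => "/var/log/app/*.log"        # hypothetical path to the logs of the system under test
    start_position => "beginning"       # also read content that existed before startup
  }
}

filter {
  grok {
    # extract timestamp, level, and message from lines such as
    # "2021-02-04 10:15:30 ERROR connection timed out"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"  # one index per day keeps later cleanup simple
  }
}

The three sections map one-to-one onto the input, filter, and output plug-ins described above.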
The configuration of Elasticsearch and Kibana is simpler; only a few parameters in the existing configuration files need to be changed.

Elasticsearch's configuration (elasticsearch.yml) is changed as follows:

http.port: 9200
http.cors.allow-origin: "/.*/"
http.cors.enabled: true

Kibana's configuration (kibana.yml) is changed as follows (in Kibana 7 and later this setting is named elasticsearch.hosts):

elasticsearch.url: "http://localhost:9200"

The three tools are started in similar ways: enter the bin directory of each installation package and launch the tool from the command line. Finally, open the Kibana address in a browser (the port defaults to 5601); the interface is shown in Figure 3.
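For example, assuming the three packages are unpacked side by side (the paths and the pipeline file name are illustrative):

./elasticsearch/bin/elasticsearch          # serves on port 9200
./logstash/bin/logstash -f app-logs.conf   # -f selects the pipeline configuration written above
./kibana/bin/kibana                        # web UI on http://localhost:5601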


Figure 3 Kibana interface display diagram

3. Thoughts on using ELK

In the test environment, the logs of the system under test are collected into and stored on the ELK platform and then viewed in the Kibana interface. Not only can the stored logs be searched in full text, log lines at ERROR level can also be located directly; and for testers who are not fluent in Linux operations or IDE debugging, this lowers the threshold for locating problems through logs.
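For example, if the Logstash filter extracts a level field (as in the sketch above), the error logs can be pulled up with a one-line query in the Kibana search bar:

level:ERROR

Combined with Kibana's time filter, this narrows a failure window down to a handful of lines.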

A log is a continuously appended, incremental file, so storage space has to be considered in ongoing use: the design should clear out expired logs on a schedule, and during bursts of log traffic at peak times the hardware and the configuration parameters need tuning to raise the overall data transfer rate. I will keep exploring these problems in upcoming practice and bring you more solutions.
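One simple sketch of the scheduled cleanup, assuming the daily index naming from the earlier configuration and that wildcard index deletion is permitted on the cluster: run a dated delete call from a cron job.

curl -X DELETE "http://localhost:9200/app-logs-2021.01.*"   # drop all daily indices from January 2021

More recent Elasticsearch versions also offer index lifecycle management for the same purpose.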


END

Official account: Programmer Erhei, get software testing resources (interview questions, PDF documents, video tutorials)

Good things should be shared with friends
