A lightweight logging solution for JMeter distributed stress testing and performance monitoring

1. Introduction

In the previous article, we introduced how, when using JMeter in non-GUI mode for stress testing, we can use InfluxDB + Grafana to monitor real-time performance test results, or Telegraf + InfluxDB + Grafana to monitor server performance. Although the Grafana dashboard can display the number of requests executed per transaction and the failure rate, we also want to know why requests failed.

Not all HTTP request failures return a 500 status code; sometimes the response is 200. A response assertion simply checks whether the response data contains a given string, and if it does not, the request is marked as failed. But what was the actual response data at that moment? Being able to debug the application during a performance test is very important. We often use Alibaba Cloud or physical machine clusters for stress testing, and even if we record the response data in logs, we may not be able to obtain the data immediately: we can only wait for the stress test to end and then access each host via SSH/FTP to check the logs. We cannot use InfluxDB to collect these large amounts of unstructured text data the way we collect performance test results, because InfluxDB, as a time series database, is not designed for retrieving text.

A simple, lightweight logging solution is to use ElasticSearch + FileBeat + Kibana to collect and analyze error response data.

2. Background

1. Filebeat

Filebeat is a newer member of the ELK stack: a lightweight, open source log file data collector implemented in Go. Filebeat is installed on a server as an agent to monitor log directories or specific log files, either forwarding the logs to Logstash for parsing or sending them directly to ElasticSearch for indexing. Filebeat has complete documentation, simple configuration, and built-in support for ELK; it provides default configurations for one-stop collection, analysis, and display of logs generated by Apache, Nginx, System, MySQL, and other services.

As shown below, Filebeat's configuration is simple and easy to understand:

filebeat:
  spool_size: 1024                                  # batch up to 1024 events before sending
  idle_timeout: "5s"                                # otherwise flush at least every 5 seconds
  registry_file: "registry"                         # file recording read positions, stored in the working directory
  config_dir: "path/to/configs/contains/many/yaml"  # if the config grows too long, split it across a directory of files
  prospectors:                                      # inputs with the same parameters can be grouped into one prospector
    -
      fields:
        log_source: "sample"                        # like logstash's add_fields; "log_source" here marks which project the log came from
      paths:
        - /var/log/system.log                       # paths of the files to read
        - /var/log/wifi.log
      include_lines: ["^ERR", "^WARN"]              # only ship lines matching these patterns
      exclude_lines: ["^OK"]                        # drop lines matching these patterns
    -
      document_type: "apache"                       # the _type value used when writing to ES
      ignore_older: "24h"                           # stop watching files not updated for more than 24 hours
      scan_frequency: "10s"                         # rescan the directory every 10 seconds to refresh the glob matches
      tail_files: false                             # whether to start reading from the end of the file
      harvester_buffer_size: 16384                  # read 16384 bytes at a time
      backoff: "1s"                                 # check for a new line to read every 1 second
      paths:
        - "/var/log/apache/*"                       # globs are allowed
      exclude_files: ["/var/log/apache/error.log"]
    -
      input_type: "stdin"                           # besides "log", "stdin" is also supported
      multiline:                                    # merge multi-line events
        pattern: '^[[:space:]]'
        negate: false
        match: after
output.elasticsearch:
  hosts: ["127.0.0.1:9200"]                         # the elasticsearch host


The logs sent by Filebeat will contain the following fields:

  • beat.hostname: the hostname of the machine where Beat is running
  • beat.name: the name set in the shipper configuration section; if not set, it equals beat.hostname
  • @timestamp: the time at which this line was read
  • type: the value set by document_type
  • input_type: either "log" or "stdin"
  • source: the full path of the source file
  • offset: the starting offset of the log line
  • message: the log content
  • fields: any additional fixed fields are stored in this object
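For illustration, an event shipped by Filebeat for the sample prospector above might look roughly like the following (all field values are hypothetical):

```json
{
  "@timestamp": "2023-09-07T10:15:30.000Z",
  "beat": { "hostname": "press-01", "name": "press-01" },
  "type": "log",
  "input_type": "log",
  "source": "/var/log/system.log",
  "offset": 93,
  "message": "ERR something went wrong",
  "fields": { "log_source": "sample" }
}
```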

2. Elasticsearch

Elasticsearch is an open source, highly scalable, distributed full-text search engine that can store and retrieve data in near real time. It scales well and can be expanded to hundreds of servers. Elasticsearch is strong at full-text search, while InfluxDB excels at time series data, so the choice depends on the specific need. If you need to store logs and query them frequently, Elasticsearch is more suitable, as with our JMeter logs; if you only rely on the data for status displays and occasional queries, InfluxDB is more suitable.
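As a sketch of how we might later query the collected JMeter logs, the following Python snippet builds an Elasticsearch search body that matches error entries, newest first. The index name, field names, and search term are assumptions; actually sending it would require a running cluster (e.g. via plain HTTP or the official client).

```python
import json

def build_error_query(term="ERROR", size=20):
    """Build an Elasticsearch query body matching log messages
    that contain the given term, sorted newest first."""
    return {
        "size": size,
        "sort": [{"@timestamp": {"order": "desc"}}],
        "query": {"match": {"message": term}},
    }

# This body would be POSTed to e.g. http://127.0.0.1:9200/filebeat-*/_search
print(json.dumps(build_error_query(), indent=2))
```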

3. Kibana

Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. Kibana provides the ability to search, view, and interact with data stored in Elasticsearch indexes, and users can easily perform advanced data analysis and visualize data in a variety of charts, tables, and maps. Kibana's charts are not as polished as Grafana's, but Kibana is very convenient for retrieving logs from Elasticsearch.

3. Overall structure

4. Log collection architecture

5. Installation and configuration

1. Download and configure ElasticSearch

You can refer directly to the tutorial on the official website; I won't reinvent the wheel here. Official tutorial address: www.elastic.co/downloads/e…

After the installation is complete, confirm that Elasticsearch can be accessed at http://elasticsearch-host-ip:9200.
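A quick way to sanity-check the node is to hit its root endpoint and inspect the JSON it returns. The small helper below parses such a response; the sample payload is illustrative (fetching it for real would of course require a running node):

```python
import json

def es_is_up(raw_response: str) -> bool:
    """Return True if an Elasticsearch root-endpoint response looks
    healthy, i.e. it has a cluster name and a version number."""
    info = json.loads(raw_response)
    return bool(info.get("cluster_name")) and "number" in info.get("version", {})

# Illustrative payload, similar in shape to what GET / returns:
sample = '{"cluster_name": "elasticsearch", "version": {"number": "7.17.0"}}'
print(es_is_up(sample))  # True
```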

2. Download and configure Kibana

Refer to the official website tutorial: www.elastic.co/downloads/k…

Update the config/kibana.yml configuration file so Kibana can read data from Elasticsearch, then run kibana.bat/.sh and make sure the Kibana homepage is reachable at http://kibana-host-ip:5601.
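As a minimal sketch, the relevant kibana.yml settings for a local setup usually look like the following (the host values are assumptions; on older Kibana versions the key is `elasticsearch.url` rather than `elasticsearch.hosts`):

```yaml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
```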

3. Download and configure FileBeat

Refer to the official tutorial: www.elastic.co/downloads/b… We need to deploy a FileBeat node on each load generator ("press"). FileBeat is responsible for collecting the log data and sending it to Elasticsearch for storage.

Update the filebeat.yml file:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - D:\BaiduNetdiskDownload\jmeter\apache-jmeter-4.0\bin\jmeter.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
output.elasticsearch:
  hosts: ["127.0.0.1:9200"]

By default, FileBeat treats each line in the log file as a separate log entry. But JMeter exceptions may span multiple lines, so we need to configure multiline mode in filebeat.yml.

Each entry in jmeter.log starts with a timestamp (yyyy-MM-dd). We can therefore configure the pattern so that a new entry begins at each timestamp; any line without a timestamp is appended by FileBeat to the previous entry, per the configuration above.
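To illustrate what this multiline configuration does, here is a small Python sketch that groups raw lines into entries using the same timestamp pattern (the sample log lines are made up):

```python
import re

TS = re.compile(r"^[0-9]{4}-[0-9]{2}-[0-9]{2}")  # same as multiline.pattern

def group_entries(lines):
    """Start a new entry at each timestamped line; append non-matching
    lines (e.g. stack trace lines) to the previous entry."""
    entries = []
    for line in lines:
        if TS.match(line) or not entries:
            entries.append(line)
        else:
            entries[-1] += "\n" + line
    return entries

raw = [
    "2023-09-07 10:15:30,000 ERROR o.a.j.t.JMeterThread: Error",
    "java.lang.RuntimeException: boom",
    "    at Example.run(Example.java:10)",
    "2023-09-07 10:15:31,000 INFO  o.a.j.t.JMeterThread: Done",
]
print(len(group_entries(raw)))  # 2 entries: the error (with its trace) and the info line
```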

After starting, FileBeat monitors the log file, and whenever the file is updated the new data is sent to Elasticsearch for storage.

6. JMeter log collection

We created a very simple test plan, containing only a Debug Sampler, and used a BeanShell Assertion to write the response data to the log file whenever an error occurs.
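A minimal sketch of such a BeanShell Assertion might look like the following. The exact failure check and message format are assumptions; inside a BeanShell Assertion, JMeter provides the `SampleResult` and `log` variables used here, and the snippet only runs inside JMeter:

```java
// BeanShell Assertion: write the response body to jmeter.log when the
// sample failed, so FileBeat can ship it to Elasticsearch.
if (!SampleResult.isSuccessful()) {
    log.error("Failed sample: " + SampleResult.getSampleLabel()
            + " | response: " + SampleResult.getResponseDataAsString());
}
```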

After the stress test starts, FileBeat will start collecting information from the log files and forward it to ElasticSearch storage. We can retrieve detailed logs through Kibana.

If we click the small arrow to expand the details, the message section shows the log details we are interested in.


7. Summary

In addition to live performance test results and real-time server metrics, we can now also collect the response data of failed requests in real time. This setup is very useful for long-running distributed load tests: it lets us examine the response data to understand the application's condition and the test tool's behavior when a transaction suddenly starts failing.


Origin blog.csdn.net/weixin_57794111/article/details/132759646