jmeter command line run - distributed testing


In the previous article, we talked about running JMeter from the command line on a single node. JMeter is implemented in Java underneath, which consumes memory and CPU, so if a project needs heavy concurrency against the server, a single JMeter node can hardly generate that load on its own. In that case you need to run JMeter in distributed mode:

1: Let’s talk about the principle of distributed testing

Process:

1: After the scheduler (master) starts, it copies its local jmx file and distributes it to each remote slave machine;

2: After a slave receives the script, it runs it in command-line mode. Every slave gets the same script, so if the jmx script runs 50 threads for 3 minutes on 3 slave machines, the actual load is 50*3 = 150 threads running concurrently for 3 minutes;

3: During execution, each slave streams its results back to the master, which collects and aggregates the data from all slaves, so the master ends up with one result set aggregated across every slave.

 

Precautions:

1: As noted above, the master copies the jmx file to the slaves after it starts, so you do not need to upload the jmx to each slave machine; uploading one jmx script on the master is enough.

2: Parameterization files: if you use a CSV file for parameterization, you must copy the parameter file to every slave, and the file path must be the same on each of them.

3: It is best to keep the scheduler (master) and the executors (slaves) on separate machines. Because the master has to send the script to the slaves and receive the test data they return, it consumes resources itself, so a dedicated machine is recommended for the master.

4: Make sure the JMeter version and plug-in versions are identical on every machine to avoid unexpected problems.

5: Total samples in a distributed test = number of threads * number of loops * number of executors. The logic is: the script each executor (slave) runs is distributed by the scheduler (master), so every executor runs the same script; therefore total samples = samples per script * number of executors, where samples per script = number of threads * number of loops.
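The sample-count arithmetic above can be sketched as a tiny calculation. The function and the loop count are illustrative; the 50 threads and 3 slaves echo the example earlier in this article:

```python
# Total samples in a JMeter distributed test:
# samples per script = threads * loops; every slave runs the same script,
# so the overall total is samples per script * number of slaves.
def total_samples(threads: int, loops: int, slaves: int) -> int:
    return threads * loops * slaves

# Hypothetical example: 50 threads, 10 loops, 3 slave machines.
print(total_samples(50, 10, 3))  # 1500 samples in total
```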

 

2: Having covered the principle, let's look at how to run a JMeter distributed test

  • Deploy Java and JMeter on every machine that will take part in the distributed test. The JMeter version and plug-in versions must be identical on each machine, and it is best to deploy to the same path (which is more convenient if you use CSV parameterization)

Deploying JMeter is simple: download the corresponding version from the official website, upload it to the server, and decompress it. Here is my cloud disk address: http://pan.baidu.com/s/1bI3r2I Password: f5ll.

For example, I deployed on four machines: 134.64.14.95, 134.64.14.96, 134.64.14.97, 134.64.14.98, and the deployment path of each machine is: /home/tester

 

  • Modify the jmeter.properties configuration in the bin directory on each slave machine. My three slave machines are: 134.64.14.96, 134.64.14.97, 134.64.14.98

In jmeter.properties under the jmeter/bin directory of each of the 3 slave machines, change server_port to a port that is not in use on that machine. The default is 1099; here I changed it to 7899 (you can keep the default or pick any other port, as long as it is free). remote_hosts stays 127.0.0.1 and does not need to be modified.

After making the change, save the file. The 3 machines I configured are:

134.64.14.96 machine (remote_hosts: 127.0.0.1, server_port: 7899)

134.64.14.97 machine (remote_hosts: 127.0.0.1, server_port: 7899)

134.64.14.98 machine (remote_hosts: 127.0.0.1, server_port: 7899)
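As a sketch, the relevant lines in each slave's bin/jmeter.properties would look like this (7899 is just the example port chosen above):

```properties
# bin/jmeter.properties on each slave (134.64.14.96 / 97 / 98)
# Port the slave's jmeter-server listens on; any free port works.
server_port=7899
# Left at the default; only the master's remote_hosts list matters.
remote_hosts=127.0.0.1
```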

 

  • After the slave machines are configured, configure the master machine. My master machine is 134.64.14.95

Note that the master, as the scheduling machine, has some performance overhead of its own, so when configuring the remote executors we did not include the master itself; we configured only the 3 executors.

After making the change, save the file. The machine I configured is:

134.64.14.95 machine (remote_hosts: 134.64.14.96:7899, 134.64.14.97:7899, 134.64.14.98:7899; server_port: left commented out)
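Sketched as a fragment of the master's bin/jmeter.properties (the ports match the slave configuration above):

```properties
# bin/jmeter.properties on the master (134.64.14.95)
# Comma-separated host:port list of the slaves to drive remotely.
remote_hosts=134.64.14.96:7899,134.64.14.97:7899,134.64.14.98:7899
# server_port stays commented out; the master does not run jmeter-server.
#server_port=1099
```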

 

  • After all machines are configured, upload the test script. We only need to upload the jmx file to the master machine, i.e. the corresponding JMeter directory on 134.64.14.95. The other execution machines do not need the jmx file, because after startup the master copies its local jmx to each remote execution machine.

 

  • Now let's start the distributed test. There are two steps:

1: First start the execution machines, i.e. the slaves 134.64.14.96, 134.64.14.97, 134.64.14.98. On each slave, run the following command to start jmeter-server

The command is: ./jmeter-server

 

2: After confirming that all three slave execution machines started correctly, go to the master machine 134.64.14.95 and run the following command to start the distributed test

The command is: ./jmeter -n -t baidu_requests_results.jmx -r -l baidu_requests_results.jtl

 

3: Test command description

 ./jmeter -n -t baidu_requests_results.jmx -r -l baidu_requests_results.jtl 

-n means run without the GUI
-t specifies the jmx file to run
-l specifies the result file to generate
-r means start all remote agents (the machines listed in remote_hosts)
 

4: Description of test results

Notice the information printed on the console:

summary +   5504 in 00:00:02 = 3590.3/s Avg:     1 Min:     0 Max:   174 Err:  5504 (100.00%) Active: 59 Started: 58 Finished: 0
summary + 1224043 in 00:00:30 = 40802.8/s Avg:     0 Min:     0 Max:   188 Err: 1224043 (100.00%) Active: 60 Started: 59 Finished: 0
summary = 1229547 in 00:00:32 = 38989.9/s Avg:     0 Min:     0 Max:   188 Err: 1229547 (100.00%)
Parse:
The summary + lines show the number of requests added during each interval; from these, the rates 3590.3/s, 40802.8/s and 38989.9/s are computed, i.e. the number of requests completed per second (throughput). A line is printed at regular intervals, and in the final summary = line you can see the total number of requests, 8213739, and the average throughput, 45495.4/s (requests completed per second):
summary = 8213739 in 00:03:01 = 45495.4/s Avg:     0 Min:     0 Max:   191 Err: 8213739 (100.00%)
Also, Active: 60 is the number of active threads. We test with 3 machines concurrently, each script running 20 threads for 3 minutes, so the number of active threads is 20*3 = 60 and the run time is 3 minutes.
Error rate: this shows whether the server can withstand the load. Here all requests failed (100% errors) because the same IP was sending concurrent requests to Baidu within a short time; Baidu does not allow that, so the requests were rejected and counted as errors.
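As an illustrative sketch (the regex and field names are my own, not part of JMeter), summary lines like the ones above can be parsed to pull out the request count, throughput, and error rate:

```python
import re

# Hypothetical parser for JMeter console summary lines such as:
# summary = 8213739 in 00:03:01 = 45495.4/s Avg: 0 ... Err: 8213739 (100.00%)
SUMMARY_RE = re.compile(
    r"summary\s*[+=]\s*(?P<count>\d+)\s+in\s+[\d:]+\s*=\s*(?P<rate>[\d.]+)/s"
    r".*?Err:\s*(?P<errors>\d+)"
)

def parse_summary(line: str) -> dict:
    m = SUMMARY_RE.search(line)
    if m is None:
        raise ValueError("not a summary line")
    count = int(m.group("count"))
    errors = int(m.group("errors"))
    return {
        "requests": count,
        "throughput": float(m.group("rate")),  # requests completed per second
        "error_rate": errors / count if count else 0.0,
    }

line = "summary = 8213739 in 00:03:01 = 45495.4/s Avg: 0 Min: 0 Max: 191 Err: 8213739 (100.00%)"
print(parse_summary(line))
```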
 
Of course, in addition to the console output, a performance test also needs to watch the system metrics of all load generators and of the machine under test, such as CPU, disk I/O and memory consumption, as well as the server and client log information.
From the generated jtl file we can extract the information we care about, such as throughput, response time, hit rate and error rate.
How to turn jtl files into charts and analysis is described in detail in my jmeter series blog post - jtl test report of jmeter

