Awesome! JMeter simulates more than 50,000 concurrent users

This article walks through, from a load-testing perspective, everything needed to run a smooth 50,000-concurrent-user test.

A record of the discussion can be found at the end of this article.

Quick summary of steps

  1. Write your script
  2. Use JMeter for local testing
  3. BlazeMeter sandbox test
  4. Use 1 console and 1 engine to find the users-per-engine number
  5. Set up and test your cluster (1 console and 10-14 engines)
  6. Use Master/Slave features to reach your maximum concurrency goal

Step 1: Write your script

Before you start, download the latest version of JMeter from the Apache JMeter site, jmeter.apache.org.

You will also want to download the additional JMeter plug-ins, as they can make your work easier.

There are many ways to obtain scripts:

  1. Use BlazeMeter's Chrome extension to record your plan
  2. Use the JMeter HTTP(S) test script recorder to set up a proxy so that you can run your tests and record everything
  3. Build it by hand from scratch (perhaps for functional/QA testing)

If your script was recorded (options 1 & 2), keep the following in mind:

  1. You will need to change parameters such as the username & password, or set up a CSV file of values so that each user can be different.
  2. To complete requests such as "login" or "add to cart", you may need a Regular Expression Extractor, JSON Path Extractor, or XPath Extractor to pull out strings such as tokens, form-build IDs, and other elements.
  3. Keep your script parameterized, and use configuration elements such as HTTP Request Defaults to make switching between environments easier.
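As an illustration of point 1, here is a minimal Python sketch that generates a users.csv file a CSV Data Set Config element could read. The file name and column names are hypothetical; match them to the variable names your test plan actually references.

```python
import csv

# Write a users.csv that a JMeter "CSV Data Set Config" element can read.
# Column names (username, password) and the file name are illustrative;
# align them with the variables your script uses.
rows = [(f"user{i:04d}", f"pass{i:04d}") for i in range(1, 6)]

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "password"])  # header row
    writer.writerows(rows)
```

In the test plan, refer to the file by name only ("users.csv"), so it resolves next to the script once uploaded.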

Step 2: Use JMeter for local testing

Debug your script with 1 thread and 1 iteration, using the View Results Tree listener, the Debug Sampler, the Dummy Sampler, and the open Log Viewer (some JMeter errors are reported there).

Go through all scenarios (including both true and false responses) to make sure the script behaves as expected.

After the single-thread test succeeds, raise it to 10-20 threads and run for about 10 minutes, then check:

  1. Each user is supposed to be independent - is that actually the case?
  2. Did you receive any errors?
  3. If you are testing a registration process, look at your back end - are accounts created according to your template? Are they distinct from one another?
  4. Do the statistics in the Summary Report make sense? (average response time, errors, hits per second)
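For reference, the Summary Report statistics mentioned above boil down to simple arithmetic over the recorded samples; the sample data below is invented purely for illustration.

```python
# Invented (elapsed_ms, is_error) samples standing in for real results.
samples = [(120, False), (95, False), (310, True), (150, False)]
duration_s = 2.0  # assumed wall-clock span covered by the samples

avg_response_ms = sum(ms for ms, _ in samples) / len(samples)    # 168.75
error_rate = sum(1 for _, err in samples if err) / len(samples)  # 0.25
hits_per_second = len(samples) / duration_s                      # 2.0
```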

Once the script is ready:

  1. Clean up the script: remove the Debug and Dummy Samplers and delete your listeners
  2. If you use a listener such as "Save Responses to a file", or a CSV Data Set Config, make sure you are not using any local path - use only the file name (as if it sits in the same folder as your script)
  3. If you use your own proprietary JAR files, make sure they are uploaded as well
  4. If you use more than one thread group (beyond the default one), make sure to set the values before uploading to BlazeMeter

Step 3: BlazeMeter sandbox test

If this is your first test, you should review BlazeMeter's article on how to create a test.

Set the sandbox test configuration to 300 users, 1 console, and 50 minutes.

This sandbox configuration lets you try out your script and make sure everything runs well on BlazeMeter.

To do this, first press the gray button - "Tell the JMeter engine I want full control!" - to gain full control of your test parameters.

Common problems you may encounter:

  1. Firewall - make sure your environment is open to BlazeMeter's CIDR list (it is updated in real time) and that those ranges are whitelisted
  2. Make sure all your test files (CSVs, JARs, JSON, user.properties, etc.) are uploaded
  3. Make sure you are not using any local path

If problems persist, look at the error log (you can download the entire log).

The sandbox configuration can look like this:

  • Engines: console only (1 console, 0 engines)
  • Threads: 50-300
  • Ramp-up: 20 minutes
  • Iterations: forever
  • Duration: 30-50 minutes

This gives you enough data during the ramp-up period (in case you run into problems), and lets you analyze the results to make sure the script executes as expected.

Check the Waterfall/WebDriver tab to verify that the requests behave normally. You shouldn't see any errors at this point (unless they are deliberate).

Watch the Monitoring tab and observe memory and CPU consumption over time - this informs the users-per-engine number you will set in step 4.

Step 4: Use 1 console and 1 engine to set the number of users per engine

Now that we are sure the script runs well on BlazeMeter, we need to figure out how many users to put on a single engine.

If you can make this decision from the sandbox data, great!

Here is a way to calculate the number without going back to the sandbox test data.

Set up your test configuration:

  • Number of threads: 500
  • Ramp-up: 40 minutes
  • Iterations: forever
  • Duration: 50 minutes

Use one console and one engine.

Run the test and monitor your test engine (via the monitor tab).

If your engine does not reach 75% CPU usage or 85% memory usage (one-time spikes can be ignored):

  • Raise the number of threads to 700 and test again
  • Keep raising the thread count until you reach 1,000 threads or 60% CPU or memory usage

If your engine does exceed 75% CPU usage or 85% memory usage (one-time spikes can be ignored):

  • Find the point where you first reached 75% and note how many concurrent users were running at that moment
  • Run the test again with that number of threads instead of the earlier 500
  • This time use a realistic ramp-up (5-15 minutes is a good start) and set the duration to 50 minutes
  • Make sure CPU usage stays below 75% and memory usage below 85% for the entire test

For safety, reduce the number of threads per engine by 10%.
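The calibration above amounts to a one-line calculation. The observed concurrency below is an assumed example, not a measurement.

```python
import math

# Assumed observation: the engine first crossed 75% CPU at this concurrency.
users_at_75_pct_cpu = 800

# Back off 10% for safety, as suggested above.
threads_per_engine = math.floor(users_at_75_pct_cpu * 0.9)  # 720
```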

Step 5: Set up and test your cluster

We now know how many threads a single engine gives us. By the end of this step, we will know how many users a cluster can give us.

A cluster is a logical container holding one console (and only one) and 0-14 engines.

Even though you can create a test that uses more than 14 engines, doing so actually creates two clusters (you will notice the number of consoles increase) and clones your test.

The limit of 14 engines per cluster comes from BlazeMeter's own testing, which ensures the console can handle the pressure of 14 engines generating massive amounts of data.

So in this step we reuse the test from step 4, modifying only the number of engines: increase it to 14.

Run the test for the full duration of the final test. While it runs, open the Monitoring tab and verify that:

  1. No engine exceeds 75% CPU or 85% memory usage;
  2. Your console (find its name by clicking the Logs tab -> Network Information and looking at the console's private IP address) also stays below 75% CPU and 85% memory usage.

If your console hits the limit, reduce the number of engines and run again until the console stays below it.

At the end of this step, you will find:

  1. The number of users in each cluster;
  2. The hit rate of each cluster.

Check the other statistics in the Aggregate Table, and look at the load results graph for more information about your cluster's throughput.
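Per-cluster capacity follows directly from the per-engine number found in step 4; both figures below are example values.

```python
# Example figures: per-engine capacity from step 4, and BlazeMeter's
# per-console engine cap described above.
users_per_engine = 500
engines_per_cluster = 14

users_per_cluster = users_per_engine * engines_per_cluster  # 7000
```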

Step 6: Use Master/Slave features to reach your maximum concurrency goal

We have reached the last step.

We know that the script is running, and we also know how many users an engine can support and how many users a cluster can support.

Let's make some assumptions:

  • One engine supports 500 users
  • A cluster can use 12 engines
  • Our goal is to test 50,000 users

So to accomplish this, we would need 8.33 clusters.

We could use 8 clusters of 12 engines plus one cluster of 4 engines, but it is better to spread the load like this:

If we use 10 engines per cluster instead of 12, each cluster supports 10 * 500 = 5,000 users, and we need 10 clusters to reach 50,000 users.
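The arithmetic behind this choice can be sketched as follows, using the assumed figures above.

```python
import math

# Assumptions from the article: 500 users per engine, 50,000-user goal.
users_per_engine = 500
goal = 50_000

# 12 engines per cluster gives a ragged split: 8.33 clusters.
clusters_at_12 = goal / (users_per_engine * 12)

# 10 engines per cluster gives 5,000 users each and an even 10 clusters.
users_per_cluster = users_per_engine * 10
clusters_at_10 = math.ceil(goal / users_per_cluster)
```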

This has the following benefits:

  1. No need to maintain two different test configurations
  2. We can add 5,000 users simply by duplicating an existing cluster (5K is a rounder number than 6K)
  3. We can keep scaling up for as long as we need

Now, we are ready to create the final Master/Slave test with 50,000 users:

  1. Change the name of the test from "My prod test" to "My prod test - slave 1".
  2. Go back to the step-5 test and change Standalone to Slave under Advanced Test Properties.
  3. Press the save button - we now have the first of our 9 slaves.
  4. Return to "My prod test - slave 1".
  5. Press the copy button.
  6. Repeat steps 1-5 until you have created 9 slaves.
  7. Go back to "My prod test - slave 9" and press the copy button.
  8. Change the name of the test to "My prod test - Master".
  9. Change Slave to Master under Advanced Test Properties.
  10. Check all the slaves we just created ("My prod test - slave 1..9") and press save.

Your 50,000-user Master/Slave test is ready. Press the start button on the master to launch 10 tests, each with 5,000 users.

You can modify any test (slave or master) so that they run from different regions, use different scripts/CSVs and other files, different network emulation, different parameters, and so on.

The aggregated results appear in a new tab of the master's report called "Master load results". You can also view each test's results independently by opening its individual report.

Reprinted at: https://mp.weixin.qq.com/s/9NEg13zLGQnVIroXm7evAw
