Performance Testing Basics (Part 2): Concurrent Users

Two types of concurrency

  1. The first is concurrency in the strict sense: all users perform the same operation at the same time, where "operation" generally means the same type of business transaction. For example, all users log in at the same time, or all users submit a form at the same time.
  2. The second is concurrency in a broader sense. It differs from the first type in that, although multiple users issue requests or perform operations on the system at the same time, those requests or operations may be the same or different. For example, at the same moment one user is logging in while another is submitting a form.
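The two types can be sketched with threads released by a barrier. This is a minimal illustration, not a real load tool: every virtual user performs the same hypothetical "login" action (strict concurrency); mixing different actions in the list would model broad concurrency.

```python
import threading

results = []
lock = threading.Lock()
barrier = threading.Barrier(4)   # releases all 4 virtual users at once

def virtual_user(action):
    barrier.wait()               # every thread starts its "request" together
    with lock:
        results.append(action)

# Strict concurrency: every virtual user performs the same operation.
# Broad concurrency would mix actions, e.g. ["login", "submit_form", ...].
threads = [threading.Thread(target=virtual_user, args=("login",)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # four "login" actions issued as close to simultaneously as the OS allows
```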

Concurrency from the perspective of the server

The two definitions above explain concurrency from the perspective of user business, because the performance tests we usually run are also driven from the user side against business-level operations.

If instead you consider the pressure on the server while the whole system is running, the picture looks like this: divide the system's running time into discrete points, and at each point some number of users are "making requests to the server at the same time". That number is the concurrent access load the server actually bears.
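The "discrete time points" idea can be expressed directly: given the start and end time of each request, the server-side concurrency at a time point is the number of requests in flight at that instant. The interval values below are made up for illustration.

```python
def concurrency_at(requests, t):
    """Number of requests in flight on the server at time t."""
    return sum(start <= t < end for start, end in requests)

# Hypothetical (start_time, end_time) pairs, in seconds, for requests hitting the server
requests = [(0.0, 0.4), (0.1, 0.3), (0.2, 0.6), (0.5, 0.9)]

# Sample the timeline at discrete points and take the peak
peak = max(concurrency_at(requests, t / 10) for t in range(10))
print(peak)  # 3 requests overlap around t = 0.2
```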

Concurrency in the true sense does not exist

From the perspective of a performance testing tool: although such a tool can simulate thousands of requests within one second, those requests are still generated in sequence. Even if they really were produced "simultaneously", by the time they travel across the network to the server, bandwidth limits and latency ensure that they do not arrive as truly "simultaneous" requests.
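A small experiment makes the point: even when threads are released by a barrier at "the same" instant, their arrivals at a shared resource still serialize into an ordered sequence with distinct timestamps. This is a sketch, not a claim about any particular load tool.

```python
import threading
import time

stamps = []
lock = threading.Lock()
barrier = threading.Barrier(5)

def fire():
    barrier.wait()                     # release all "requests" together
    with lock:                         # the shared resource admits one at a time
        stamps.append(time.perf_counter_ns())

threads = [threading.Thread(target=fire) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The lock forces the appends into a sequence: the "simultaneous"
# requests still reach the shared resource one after another.
print(stamps == sorted(stamps))
```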

From the perspective of the server: when it receives concurrent requests, it still processes them in order. Because the time needed to handle each request is extremely short, it can process tens of thousands of requests per second; this is why we express its concurrency capacity in requests per second.
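The requests-per-second figure follows directly from the per-request service time. The service time below is an assumed value chosen only to show the arithmetic.

```python
# If the server handles requests sequentially, throughput is simply the
# inverse of the per-request processing time.
service_time_ms = 0.05                       # assumed: 0.05 ms per request
throughput_per_sec = 1000 / service_time_ms  # requests handled in one second
print(int(throughput_per_sec))               # 20000 requests/second on one core
```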

(Note: this assumes that both the load-generating machine and the system under test run on single-core CPUs.)

The number of system users and the number of people online at the same time

In actual performance testing, two concepts related to concurrency come up frequently: the "number of system users" and the "number of simultaneous online users".

Suppose there is a website on which only registered users can log in and use its functions, such as uploading avatars and reading expert articles. The system has 200,000 registered users, meaning 200,000 people are entitled to use all of the site's functions; 200,000 is the website's "number of system users".

The website also has an online-statistics feature, from which we can see that the record for the number of people logged on at the same time is 20,000, i.e. 20,000 people had the site open in a browser simultaneously; 20,000 is the "number of simultaneous online users".

Does the system therefore have 20,000 concurrent users? No. The figure of 20,000 only means that this many users were logged on at the system's peak; it does not mean the server actually bore that much pressure. The server's load also depends on how those users behave: the number of requests issued by those 20,000 users at any given moment may be far smaller. So what is the maximum concurrent load the server can actually sustain? That depends on the concurrent business operations and the business scenarios, and it can generally be determined by analyzing the server logs.
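The log-analysis step mentioned above can be sketched as counting requests per second and locating the peak. The log lines and their format below are invented for illustration; real access logs (e.g. from Nginx or Apache) would need a proper timestamp parser.

```python
from collections import Counter

# Simplified, hypothetical access-log lines: "timestamp_seconds path"
log_lines = [
    "100 /login", "100 /login", "100 /profile",
    "101 /submit", "102 /login", "102 /submit",
]

# Count how many requests the server received in each one-second bucket
per_second = Counter(line.split()[0] for line in log_lines)
peak_second, peak_requests = per_second.most_common(1)[0]
print(peak_second, peak_requests)  # second "100" saw 3 requests
```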


Origin: blog.csdn.net/Python_BT/article/details/108749109