Software Performance Testing Insights and Summary

I have been in this business for almost two years. In that time I have watched software testing develop rapidly, and demand for testing from enterprises has grown sharply. Yet as far as I can tell, most enterprises that do test stop at the black-box testing stage, and such testing is incomplete. Testing itself, I think, can be divided into three types according to what it is oriented toward.
1. System-oriented: testing aimed at the system under test itself. This type of testing verifies the completeness and soundness of the system: whether each method works, whether each function behaves correctly, and whether the requirements are met. It covers unit testing, integration testing, validation testing, system testing, and acceptance testing.
2. User-oriented: testing the system for defects from the user's perspective, focused on user experience: whether the interface is attractive, whether the system is easy to use, whether it is compatible with multiple devices, and whether it responds quickly. It covers UI testing, compatibility testing, and performance testing.
3. Enterprise-oriented: testing focused on whether the system poses risks to the business, such as data leakage and permission security. The most common example is security testing.
Of these, enterprise-oriented testing can be postponed or dropped depending on the situation, but system-oriented and user-oriented testing are indispensable. The system-oriented side needs little argument: it is the most conventional black-box testing and the most basic expression of system quality. Beyond sound and complete functionality, though, we should pay more attention to the user's experience, because from the user's standpoint the most immediate impressions of a system are how the interface looks and how quickly it responds.
With that context, here is a summary of some personal insights and conclusions about performance testing.
Performance testing uses automated tools to simulate a variety of normal, peak, and abnormal load conditions and measure the system's performance indicators under them.
In plain terms, a tool simulates users operating the system, scales that up to many concurrent users, and monitors server resource usage and virtual-user behavior throughout. Analyzing that data reveals the system's current performance bottlenecks and their causes. To do this well, we need to figure out a few things.
How does the tool simulate users?
When we open a browser and visit a web system, the browser sends a series of requests to the system's server (you can install a packet-capture tool to see them). The server responds to each request and returns data to the browser, which processes it and renders the page for us. In performance testing, the tool plays the role of the browser: it sends requests to the server and receives the returned data, but instead of rendering it, it presents the raw responses to us. We can configure the tool to issue the required requests in the order a real operation would, simulating a user performing some action in the system. By then setting the number of concurrent users (virtual users), we simulate many users working at the same time. For exactly this reason, the scripts we design for a performance test need to be as realistic as possible; otherwise the resulting data will not reflect the server's real behavior.
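As a minimal sketch of this idea, the following Python script uses only the standard library to play the tool's role: several concurrent virtual users each send HTTP requests in order and record response times, without rendering anything. The embedded echo server is a stand-in so the sketch runs on its own; in a real test the requests would target the system under test, and the user counts here are arbitrary.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Stand-in server so the sketch is self-contained; a real performance test
# would send these requests to the system under test instead.
class EchoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def virtual_user(requests_per_user):
    """One virtual user: send requests in order, timing each round trip."""
    timings = []
    for _ in range(requests_per_user):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # receive the returned data, but do not render it
        timings.append(time.perf_counter() - start)
    return timings

# Simulate 5 concurrent virtual users, 10 requests each.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(virtual_user, [10] * 5))

all_timings = [t for user in results for t in user]
print(f"{len(all_timings)} requests, "
      f"avg {sum(all_timings) / len(all_timings) * 1000:.2f} ms")
server.shutdown()
```

Commercial and open-source tools do essentially this at much larger scale, adding the parameterization and monitoring discussed below.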
How do we perform a performance test?
Now that we understand how performance testing works, how do we actually carry it out? Starting from the test-preparation stage would make this tediously long, so I will simplify it into stages.
1. Design scripts: Different tools implement scripting differently, but they all do the same thing: when the tool runs the script, it sends requests to the server in the order of the user's operations. Beyond the request order, the script also needs various settings, such as parameterization, correlation, checkpoints, and think time. Because HTTP requests are stateless, if the function under test requires login, we also need to attach the Cookie or session ID. After writing the script, run it and examine the results.
2. Analyze the data: After a run you get a summary graph or detailed data for the results. How you analyze it depends on the question. To find the system's bottleneck, as in load testing, pay most attention to server resources. To gauge the server's processing capacity, focus on response time, TPS, throughput, and similar metrics.
3. Design scenarios: In reality, it is impossible for every online user to operate only one function of one module while using the system. We therefore need to learn, through whatever channels are available, roughly how users exercise each module in different time periods, and then fold those usage patterns into the script.
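The scenario mix and the metrics above can be sketched together in a few lines of Python. The module names, mix weights, run duration, and timings below are all hypothetical stand-ins for what a real run would collect; the point is only to show weighted scenario selection and how TPS and percentile response times fall out of the raw data.

```python
import random
import statistics

# Hypothetical usage mix, as might be gathered from production logs:
# module/action -> fraction of user transactions.
scenario_mix = {"browse": 0.6, "search": 0.3, "checkout": 0.1}

def pick_action(rng):
    """Choose the next user action according to the observed mix."""
    return rng.choices(list(scenario_mix), weights=list(scenario_mix.values()))[0]

# Simulated raw results: (action, response_time_seconds) pairs that a real
# run would record; fabricated here purely to demonstrate the analysis.
rng = random.Random(42)
results = [(pick_action(rng), rng.uniform(0.05, 0.30)) for _ in range(1000)]
test_duration = 20.0  # seconds the load ran (assumed)

times = sorted(t for _, t in results)
tps = len(results) / test_duration            # transactions per second
avg = statistics.mean(times)                  # average response time
p90 = times[int(0.9 * len(times)) - 1]        # 90th-percentile response time

print(f"TPS: {tps:.1f}, avg: {avg * 1000:.0f} ms, p90: {p90 * 1000:.0f} ms")
```

Percentiles matter more than the average here: a healthy mean can hide a slow tail that real users will feel.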
In addition, there is the writing of test cases and reports: a good test case makes the test more effective, and a good report conveys the results more clearly.
This article is from: http://www.spasvo.com.cn/products/tc.asp
