Software Quality and Testing Essays by Baidu Engineers

Author | Baidu Mobile Ecological Quality and Efficiency Engineers

Introduction

Against the backdrop of cost reduction and efficiency improvement, the emergence of large-model technology represented by ChatGPT has had a huge impact on the fields of software quality and software testing, and has made software quality workers anxious. This shows mainly in two ways: companies pay less and less attention to software quality practitioners, and development or quality behaviors that chase short-term delivery are not uncommon. Based on this, I have recently summarized more than 10 years of thinking about software quality work, hoping to help practitioners see some directions and make more reasonable judgments in a complex and changeable environment.

This article tries to explain, in plain language, an understanding of software quality and testing, so as to better guide practitioners in carrying out quality and testing work, to clarify the content and significance of a software quality worker's daily work, and even to explain why it is done this way. Two points first:

1. The article will not discuss the implementation of testing technologies at length;

2. The article is limited to software quality and testing and will not expand on other related topics in detail.

Hopefully, reading this article will help you:

1. From the perspective of software quality, accurately judge the contribution and value of what you do;

2. From the perspective of software quality, understand why you do what you do and how it serves the goal;

3. Objectively understand the relationship between software quality and testing;

4. Pay attention to software quality and testing technology; things that show no benefit in the short term are not therefore unimportant;

5. See that testing technology is full of challenges, with both depth and breadth; as a tester, you should have confidence in this;

6. Be prompted, as a practitioner, to think more.

The full text is 19,098 words, and the expected reading time is 48 minutes.

01 What is quality

Baidu Encyclopedia definition: the degree to which a set of inherent characteristics of an object meets requirements.

In people's minds, anything whose actual performance falls short of expectations is a quality issue, but such cases are nearly impossible to enumerate, mainly because quality is a very abstract word. As quality workers, however, we still need to improve our understanding of quality so that it can guide our work. For this reason, the understanding of quality needs to move from the abstract to the descriptive and concrete.

Based on my summary of software quality work in recent years, the following briefly introduces my understanding of quality.

02 Understanding software quality at three levels: usable, easy to use, and love to use

Software quality is a relatively abstract term. In the field of software engineering, the number of problems and bugs is generally used to measure software quality, and the number of bugs has a lot to do with test recall. Therefore, when people talk about quality, they often jump straight to testing and QAers, so it is normal to assume that quality = testing = QA. Below I discuss several different understandings of software quality from a "value"-driven perspective, which may help with follow-up work:

The first level of understanding of software quality - usable

Ensuring the availability of software services means providing products or services that users/customers can use normally. Described this way, quality becomes more concrete. This level of understanding drives quality assurance and testing activities to focus on service stability, SLAs, functional correctness, whether a strategy matches its original intent, and so on. This is currently where the software quality field invests the most, without question, because it gives QA its most basic function. Thinking further, from the perspective of the company providing the service, the core goal of ensuring software availability is to reduce the loss caused by service unavailability; in the end, one must look at the loss that quality problems cause to the business. Hence the practitioners' jargon of controlling loss: controlling the business loss and experience problems caused by functions that do not meet expectations. The value of a QAer's work at this stage lies in ensuring delivery and controlling business loss. Seen this way, what matters is not a simple count of problems; the impact of each problem on business loss is what practitioners should pursue and study.

The second level of understanding of software quality - easy to use

From the perspective of quality driving more value, quality can go a step further once service availability is ensured, namely ease of use: the products or services provided not only meet the basic demands of users/customers but also focus on how well the service satisfies them and how the experience feels. The focus of quality at this level gradually shifts from ensuring the stability and correctness of the service to improving the user/customer experience. From this perspective, quality brings the business real value beyond avoided losses, such as retention, usage duration, and a better closed ecological loop; the goal moves from controlling loss to stabilizing the business. It should be emphasized that retention work at the software quality level differs from retention work at the business PM level: the business PM attracts users with more useful functions from the perspective of functional completeness, while the quality side abstracts potential problems from the perspective of user experience to drive product and technology improvements. For example, for a client APP, ease of use can be understood as:

1. Whether the product disturbs the user, e.g. whether an upgrade pop-up window is reasonable;

2. The product's impact on the user's hardware and perception, e.g. the consumption and performance of CPU, memory, network, disk, etc.;

3. How well the product satisfies the user's needs, e.g. the deviation between released functions and user habits.

The value of a QAer at this stage lies in ensuring delivery and stabilizing business development.

The third level of understanding of software quality - love to use

Love to use means letting users/customers become spokespeople for the service and actively recommend it to those around them. The core of this understanding is how to tap users and their potential needs and transform them into the value the business pursues; the goal gradually converges with PM work, though the way of working differs, e.g. QA can contribute through data crawling, user behavior profiling and analysis, and so on. At the love-to-use level the focus of quality is therefore different: the means is to tap more potential users and needs on the premise of ensuring usability and ease of use, so as to promote continuous business growth. The value of a QAer at this stage lies in ensuring delivery and promoting business growth.

To sum up, software quality can be understood at the three levels of usable, easy to use, and love to use. A brief summary:

1. The core of usable is controlling the business loss caused by problems; the value of the QAer's work lies in ensuring delivery and controlling business loss. The core of easy to use is improving experience and driving sustainable business development; the value of the QAer lies in ensuring delivery and stabilizing business development. The core of love to use is tapping users to drive positive business growth; the value of the QAer lies in ensuring delivery and promoting business growth;

2. From the perspective of users/customers, the three levels are: serve existing users/customers well, retain more users/customers, and tap potential users/customers;

3. This division gives the abstract word "quality" a more vivid interpretation in the field of software engineering, making software quality work easier to understand.

From the perspective of quality development, the three levels proceed in sequence, so the quality people generally talk about is still at the usable level. The following sections are likewise explained mainly from the perspective of ensuring service availability.

03 Interpretation of software quality modeling process

Why build a software quality model? Mainly in the hope that, by dismantling the model, we can see which factors affect software quality, so that quality workers can better arrange their work and attribute good quality results, finally forming a positive cycle. Based on the thinking that the goal of ensuring availability is to control business loss, the software quality model is constructed as follows:

[Figure: a software system composed of six sub-services A, B, C, D, E, and F and their dependency relationships]

Suppose there is a software system as shown in the figure above, with six sub-services A, B, C, D, E, and F whose relationships are as depicted. To calculate the loss expectation E caused by the various changes to sub-service E, the formula is as follows:

\[
E = \sum_{i=1}^{M} S_{i}\cdot \rho _{i}\cdot \gamma _{i}\cdot \lambda _{i}\cdot D_{i}\cdot MTTR_{i}
\]

That is, the loss that problems cause to the business = number of changes × problem density (probability) × R&D leakage rate × test leakage rate × problem handling level.

\(E\) : the loss that all changes cause to the business within a period of time, which may be revenue loss, user loss, etc.;

\(M\) : the total number of changes to the service in the period; \(i\) denotes the i-th change. Change types vary, including requirement changes, online vocabulary changes, operation and maintenance operations, business operations, hardware/business changes, changes in dependent businesses, etc. This term determines the breadth of quality attention, and omissions should be avoided as much as possible, since any change may damage the system;

\(S_{i}\) : the i-th change, generally taken as 1 here;

\(\rho _{i}\) : the probability that the i-th change produces a problem (i.e., the problem density). Note that as system complexity grows, this probability covers not only problems in the changed service itself but also its impact on other services; for example, the problem density of service E in the figure should include its impact on C and A. The value is obtained through posterior regression, roughly total problems / number of changes, and it reflects the level of bugs R&D introduces into the system;

\(\gamma _{i}\) : R&D leakage rate = number of R&D-leaked problems / total number of problems.

Number of R&D-leaked problems = online problems + problems recalled by QA testing. 1 − R&D leakage rate is the R&D recall rate; this value reflects R&D personnel's self-testing level;

\(\lambda _{i}\) : test leakage rate = total online problems / number of R&D-leaked problems. 1 − test leakage rate is the test recall rate; this value reflects testers' level of recalling problems;

\(D_{i}\) : fault impact surface, the loss caused per unit time;

\(MTTR_{i}\) : fault processing time, from the occurrence of the problem to the completion of the final stop-loss operation.

Fault impact surface (loss per unit time) × fault processing time (MTTR) is collectively referred to as the problem handling level.
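To make the model concrete, here is a minimal Python sketch of the loss-expectation formula above. The parameter values are purely illustrative; the field names mirror the symbols \(S_{i}\), \(\rho_{i}\), \(\gamma_{i}\), \(\lambda_{i}\), \(D_{i}\), and \(MTTR_{i}\).

```python
# Minimal sketch of the quality model above, with illustrative numbers.
# E = sum over all changes of: S_i * rho_i * gamma_i * lambda_i * D_i * MTTR_i

from dataclasses import dataclass

@dataclass
class Change:
    s: float = 1.0        # S_i: the i-th change, normally 1
    rho: float = 0.2      # problem density: probability the change introduces a problem
    gamma: float = 0.5    # R&D leakage rate: share of problems escaping R&D self-test
    lam: float = 0.1      # test leakage rate: share of R&D-leaked problems escaping QA
    d: float = 1000.0     # fault impact surface: loss per unit time (e.g. yuan/minute)
    mttr: float = 30.0    # fault processing time (minutes)

def expected_loss(changes: list[Change]) -> float:
    """Loss expectation E caused by all changes in a period."""
    return sum(c.s * c.rho * c.gamma * c.lam * c.d * c.mttr for c in changes)

# 50 changes in a period with the default parameters:
changes = [Change() for _ in range(50)]
print(expected_loss(changes))  # 50 * 1 * 0.2 * 0.5 * 0.1 * 1000 * 30 = 15000.0
```

Reading the formula this way also shows where each role acts: R&D lowers rho and gamma, QA lowers lam, and operations lower d and mttr.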

The combined result of problem density and R&D recall can be called R&D quality, though this remains a rather loose concept.

Quality cost: the cost of resources consumed to control the business loss caused by a change, including human resources, IT resources, etc. Specifically it includes: the cost developers invest in quality activities (design and development done for the sake of quality), problem recall costs, localization and repair costs, additional IT resource costs to improve robustness, etc. Recall cost deserves special mention here: it should include all quality activities that safeguard the change, which differs somewhat from localization and repair costs, because those two are incurred only on the premise that a quality problem has been confirmed and need not account for wasted effort.

3.1 Factors of change

Change factors include requirement changes, online vocabulary changes, operation and maintenance operations, business operations, hardware/business changes, changes in dependent businesses, etc. These factors directly or indirectly affect the system and may lead to business losses. Quality workers need to consider change factors as comprehensively as possible, which has the following advantages:

1. From the quality perspective, there are more factors to consider beyond the impact of the code itself; the recall is more comprehensive and less passive.

2. From the efficiency perspective, the QAer's responsibility is not only to deliver "requirements" but also to ensure that the online system, and the entire business system affected by "requirement" changes, keeps operating normally.

It is worth mentioning that some changes are applied directly to the online system, so offline simulation is particularly difficult or unrealistic. Therefore, the necessary approvals, processes, and checkers for online operations need to be strengthened.

3.2 Problem Density

Problem density physically reflects the defect level of a system. It is related to many factors, such as architecture design, technical debt, development capability, system resilience, business complexity, and dependencies. The indicator truly reflects an R&D team's technical level on quality, so from the R&D perspective more attention should be paid to building and improving it. Built-in quality is a matter worth studying, especially the subtle influence of architecture and technical debt, which is easily overlooked.

3.3 Research and Development Recall

R&D recall reflects an R&D team's ability to recall quality problems in the code it produces, and the indicator reflects quality awareness. Businesses sometimes have to accept quick, short-term implementations for historical or emergency reasons; here QA can provide testing services, test cases, evaluation, and other auxiliary tools to assist R&D self-testing. Note that R&D does not need to recall all problems: basic functional correctness and system robustness must be guaranteed, while expensive work such as regression and interaction testing can be handed to professional QA, leaving R&D focused on architecture and technical innovation. The two roles should each perform their duties with a clear division of labor, which is why such an indicator is needed.

Problem density + R&D recall can collectively be called R&D quality; that is, R&D can improve delivery quality by reducing problem density and by self-testing.

3.4 Test recall

As mentioned earlier, R&D recall has its own types of problems to recall, while the QAer is the main force for backstopping new functions and for regression testing. For new functions, QA's role is more about catching what slipped through; and as the complexity of businesses and modules increases, regression deserves sufficient attention and reinforcement, yet it is easily overlooked by both developers and QAers. Given the richer change types in the quality model, test recall cannot be limited to offline work, so three points deserve special mention:

1. Strengthen online testing;

2. Strengthen the review and improvement process of online operations;

3. Strengthen online risk detection.

Why? First, as the external environment changes, code accumulates, and hardware ages, the system gradually develops potential new problems that may erupt; the typical case is a performance bottleneck. Second, offline test simulation naturally deviates from reality to some degree, so necessary online testing and risk inspection are inevitable. Test recall is where QAers invest the most, so testing is introduced in the following chapters.

3.5 Handling Level

No matter how tight the recall net is woven, faults will still occur. Given that failures are inevitable, what the QAer should do is minimize the business loss they cause. Controlling the loss that faults cause to the business reflects the system's fault handling level. Two main factors drive the loss:

1. Fault impact surface : defined as the amount of loss caused to the business per unit time. Try to control the scope of a fault's impact, including traffic and spread from one machine room to other businesses. Hence the traditional means of single-sandbox, single-machine, and single-machine-room checkers and online monitoring, which aim to perceive faults as early as possible and contain their impact. When a system inevitably produces many problems, or the offline recall system is not yet sound, controlling the fault impact tends to become the team's first choice. But relying on this alone is not advisable; as the saying goes, "walk by the river often and your shoes will eventually get wet", and even an impact confined to a small area still hurts the business.

2. Fault recovery time : as the name suggests, stopping the fault quickly to reduce business loss. To break the work down better, fault recovery time is generally further decomposed into a series of indicators such as MTTA, MTTI, and MTTO, which respectively reflect the fault perception time (from occurrence to when people start to follow up), the stop-loss decision time (from occurrence to deciding what stop-loss operation to perform), and the fault operation time (from the start of the operation to recovery).

MTTA : fault perception time, from occurrence to being picked up by a person. The basic technical investments here are checkers and monitoring, directions QAers have always worked on; how to alert more comprehensively, with more flexible thresholds, and reach the right person has always been the key research question in this direction.

MTTI : stop-loss decision time, i.e., deciding, from the fault's appearance, system correlations, changes, and so on, what stop-loss action R&D or OP should take. This depends on strategy and relies heavily on data about the system's behavior to make decisions more convenient.

MTTO : the time from the start of the operation to recovery from the fault. It reflects the operator's professionalism and the system's self-recovery ability, and has much to do with architecture, such as service start-up time and start-up dependencies. Any single minute may cause huge losses, so the architecture deserves special attention.
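As a worked example, and assuming the three sub-metrics partition the recovery time (a reading consistent with the definitions above, though the exact decomposition may vary by team), the loss contribution of one fault can be computed like this:

```python
# Illustrative decomposition of fault recovery time; names follow the text above.
mtta = 5    # minutes: fault occurs -> someone starts to follow up
mtti = 10   # minutes: follow-up starts -> stop-loss action is decided
mtto = 15   # minutes: operation starts -> fault fully recovered
mttr = mtta + mtti + mtto           # total processing time: 30 minutes

d = 1000    # fault impact surface: loss per minute while the fault lasts
loss = d * mttr                     # 30,000 -- shortening any sub-stage cuts loss linearly
print(mttr, loss)
```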

From the perspective of controlling business loss, handling level needs to gradually enter the QAer's overall field of vision, with stronger orientation and technical investment. Research and understanding in this direction are still deepening, so this article will not expand on it for now.

3.6 Cost of Quality

Quality cost: the human and resource cost consumed in the process of giving the object better quality.

By definition, costs include: R&D architecture development and maintenance, test recall (QAers and machines), online problem alerting, decision-making, operation and maintenance, fault localization, and problem repair. All need to be taken into account; from the QAer team's perspective, the main concerns may be the costs of test recall, online problem alerting, and decision-making.

There are two main reasons why the cost of quality is proposed here:

1. Under the background of cost reduction and efficiency improvement, nothing can receive unlimited investment. Two things deserve special attention: ineffective investment and low-efficiency investment. In a world where quality must be guaranteed and usable, two realities hold: not all changes have quality problems, and not all behaviors reveal errors. QAers must therefore try to eliminate such wasted work. As for inefficiency, the hope is to find recall methods with higher ROI; automation, for example, is an obvious pursuit that teams have nevertheless been unwilling to invest in.

2. Making quality absolutely good would mean testing indefinitely, which is obviously unrealistic. The core is to reduce the probability of failure and to handle failures quickly, so a balance is needed, and that balance comes from taking quality cost into account.

As the process above shows, every link offers room for quality work, so choosing the most reasonable way to carry it out requires considering cost factors. How to adjust the distribution of quality recall while keeping investment under control will remain a topic QAers must study under the background of cost reduction and efficiency improvement; in other words, it comes back to an ROI formula balancing quality cost against loss.
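A minimal sketch of that ROI view, with invented numbers, just to show the comparison it enables:

```python
# Hedged sketch of the ROI formula mentioned above: compare the loss a recall
# activity avoids against what the activity costs. All numbers are illustrative.

def quality_roi(avoided_loss: float, quality_cost: float) -> float:
    """ROI of a quality activity: avoided business loss per unit of cost."""
    return avoided_loss / quality_cost

# An automated regression suite: costs 2,000 per release, expected to catch
# problems that would otherwise cause 15,000 of loss.
print(quality_roi(avoided_loss=15_000, quality_cost=2_000))   # 7.5  -> worth keeping
# A manual full regression: costs 12,000 for roughly the same recall.
print(quality_roi(avoided_loss=15_000, quality_cost=12_000))  # 1.25 -> candidate to automate
```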

To sum up, the quality chapters mainly covered:

1. Describing software quality more concretely at the three levels of usable, easy to use, and love to use, to deepen the practical understanding of quality;

2. From the perspective of controlling business loss, establishing a quality model that links losses to "requirements", development, self-testing, testing, and problem handling, so that the factors and roles affecting quality are seen more clearly, helping practitioners better understand their own positioning and work in the quality field;

3. Under the background of cost reduction and efficiency improvement, quality cost is a factor that cannot be ignored; it needs to be gradually taken into account to judge which recall method achieves the best results.

04 Talk about testing

The following chapters focus on the content of testing.

4.1 What is testing

Definition: a set of activities that reveal the potential problems of an object, i.e., activities that expose the tested object's problems as early and as fully as possible by various means.

In the software field, objects generally include: functions, classes, modules (independently runnable entities), local subsystems, entire back-end systems, APPs, SDKs, etc.

There are generally two understandings:

VE (Verification) : verification

  • correct or not; objective

  • e.g., does it meet the requirements

  • generally refers to various functional tests

VA (Validation) : confirmation

  • meets expectations or not; subjective

  • e.g., is it what the user wants

  • generally refers to various user experience evaluations

4.2 Relationship between quality and testing

  • Quality≠Test

  • Testing only gives feedback on quality; it does not directly improve quality

  • Testing feeds back the current quality situation and thereby indirectly promotes quality improvement

  • Testing is an important part of quality assurance work

  • Quality is not tested into a product: quality assurance work exists in every link, and every member is responsible for quality

  • Finding problems earlier is, at its core, about adjusting the recall distribution, with quality cost as the internal driving force

All these relationships should fall into place after reading the testing chapters, on the premise that quality is understood.

4.3 Key factors affecting test results

Looking at the test process from the definition of testing:

1. Run the object under the corresponding conditions

2. Construct the corresponding instruction set or behavior set for the object

3. Send the instruction set to the object and collect the object's returns or behavioral data

4. Using the returns or data, observe the object's behavior under the given environment and behavior set, and find problems

5. When a problem appears, locate it and find its root cause

6. Finally, evaluate which conditions and behaviors are missing so the gaps can be filled

Steps 1 and 2 are called test input, 3 test execution, 4 test analysis, 5 test location, and 6 test evaluation. Testing is thus divided into five stages: test input, test execution, test analysis, test location, and test evaluation.

4.4 Test input

Definition: simulating the object's runtime conditions and constructing its instruction set, to cover the object's actual service state as fully as possible. The usual understanding is the test plan and the test case set, covering functions, UI, environment, traffic, and interfaces.

Physical meaning: determines the upper limit of the problems that can be recalled.

Specific test behaviors include:

If the object is a function : combinations of the function's input parameters, plus combinations of the function's dependencies.

If the object is a module : functional logic test cases, UI cases, test plans, realistic plus abnormal configuration combinations, realistic and abnormal traffic combinations, and environment simulation (hardware, software, connection configuration, vocabularies, configuration, and other dimensions).

If the object is a subsystem/APP : configuration authenticity, process authenticity, and environment simulation (hardware, software, connection configuration, vocabularies, configuration, etc.) for all subsystems.

If the object is a client APP : the gap between the behavior reflected in the use cases and real online behavior, the authenticity of the APP's configuration, etc.

To do a good job of test input: first, objectively characterize the object; second, on that basis, build a test system with sufficiently high simulation fidelity; and finally, construct the instruction set fed into the system. Traditionally the last step gets the most attention in human-led testing, but characterization and simulation also strongly influence test recall, and people can play only a limited role on those two points, so technology is needed, such as building software knowledge graphs and improving environment simulation.
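For the simplest object, a function, a hedged sketch of instruction-set construction might look like this; the function and the value sets are hypothetical:

```python
# A minimal sketch of instruction-set construction for a function-level object:
# enumerate combinations of input parameters, including boundary and abnormal
# values. The function and value sets here are invented for illustration.
from itertools import product

def divide(a: float, b: float) -> float:
    return a / b

a_values = [0, 1, -1, 1e9]   # normal + boundary values
b_values = [1, -1, 0.5, 0]   # 0 is the deliberately abnormal input

for a, b in product(a_values, b_values):   # full combination of the two sets
    try:
        divide(a, b)
    except ZeroDivisionError:
        print(f"revealed: divide({a}, {b}) raises ZeroDivisionError")
```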

4.5 Test Execution

Definition: The process of running the corresponding instruction set on the object under test and collecting the behavioral performance data of the object.

Physical meaning: determines recall efficiency, i.e., how many resources and how much time must be consumed to recall problems.

Specific test behaviors include: applying load, collecting and storing data, executing use cases, etc.

When use cases and environments are well designed, having people carry out the tedious execution is clearly undesirable, so technology is generally introduced for automated execution. This is the most-researched aspect of current testing technology: how to use technology to automate testing.

4.6 Test Analysis

Definition: Through the analysis of various dimensions, observe the performance of the object to discover potential problems.

Physical meaning: given the corresponding test input, determines how close recall gets to its upper limit.

Specific actions include:

Level 1: normal or not: mainly analyze whether the object is still healthy and alive, the prerequisite for the object to function at all; common checks: did it exit, was it killed, does it refuse requests, etc.

Level 2: present or not: mainly analyze whether the object's output attributes exist as expected; common checks: the output must contain a timestamp, advertisements, or certain elements, etc.

Level 3: right or wrong: mainly analyze whether the object's output attributes are correct as expected; common check: the output of 1+1 must be 2, etc.

Level 4: good or bad: mainly analyze whether the object's output attributes deviate in terms of experience; common checks: performance degradation, resource leaks, strategy effect degradation, whether user expectations are met, etc.

For different objects the behaviors to analyze may differ, so actual work requires comprehensive consideration.

Across the four levels, the difficulty of analysis rises in turn and becomes more abstract; analysis ability is also the hardest point for AI in testing to break through.

As the definition shows, test analysis observes the system's behavior to judge its problems. A purely human-centered approach sees only what is visible to the naked eye, so problems that have already arisen but not yet manifested, or that people overlook, are often missed. It is therefore necessary to use technology to capture the system's various performance data, analyze the data, and finally reach a judgment on whether a problem exists.
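As a toy illustration of the four levels, here is a hypothetical checker over a single service response; the fields and thresholds are invented:

```python
# Hedged sketch of the four analysis levels applied to one service response.
# The response fields and thresholds are hypothetical examples.

def analyze(resp: dict, baseline_latency_ms: float) -> list[str]:
    findings = []
    # Level 1 -- normal or not: is the object alive and answering?
    if resp.get("status") != 200:
        findings.append("L1: service not healthy (non-200 status)")
    # Level 2 -- present or not: do expected output attributes exist?
    if "results" not in resp or not resp["results"]:
        findings.append("L2: expected element 'results' missing or empty")
    # Level 3 -- right or wrong: is the output attribute correct?
    if resp.get("count") != len(resp.get("results", [])):
        findings.append("L3: 'count' does not match number of results")
    # Level 4 -- good or bad: experience deviation vs. a statistical baseline?
    if resp.get("latency_ms", 0) > 1.5 * baseline_latency_ms:
        findings.append("L4: latency degraded beyond 1.5x baseline")
    return findings

resp = {"status": 200, "results": ["a", "b"], "count": 3, "latency_ms": 240}
print(analyze(resp, baseline_latency_ms=120))  # flags L3 and L4
```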

4.7 Test location

Definition: when test analysis finds that the object has a problem, locate the cause of the problem based on change behavior and environmental factors, to enable quick repair.

Physical meaning: determines repair efficiency, i.e., accurately finding the root cause after a problem occurs.

Specific testing behaviors include: root-cause localization of problems, build failure identification and self-healing, etc.

Localization is a job requiring heavy investment; done well, it greatly improves work efficiency, so it is also a direction technology has always wanted to break through.

4.8 Test Evaluation

Definition: based on analysis of potential risks and the behavioral coverage of error-revealing activities, use a model to point out the possible residual risks.

Physical meaning: checking for gaps from a third-party perspective.

Specific actions include:

1. Based on changes, system topology, and other descriptions, determine the size of the existing risks

2. Obtain test input and behavioral activity data for analysis

3. Use the model to estimate the residual risk of insufficient error-revealing, display it in a visual report, and guide testers to add recall activities

Test evaluation is the final step of risk decision-making and recall.

However, test evaluation is often ignored by testers, mainly because as the level of automation rises, the habit forms of merely checking whether the report passes. It is easy to forget that automated execution mostly embodies accumulated past experience (in the absence of intelligent test generation) and can hardly fully reveal problems introduced by new changes or accumulated as the system evolves. Test evaluation should therefore gradually enter testers' field of vision; but evaluation needs enough data for analysis and judgment, so it relies more on technology plus models, with people continuously mining features to keep decisions accurate.
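A hedged sketch of such an evaluation model follows; the features and weights are invented for illustration, and a real quality model would be trained on historical data:

```python
# Hedged sketch of the evaluation step: combine change risk with observed
# coverage to estimate residual risk. Weights and inputs are hypothetical.

def residual_risk(change_size: int, modules_touched: int,
                  line_coverage: float, case_pass_rate: float) -> float:
    """Rough residual-risk score in [0, 1]; higher means more recall work needed."""
    exposure = min(1.0, 0.002 * change_size + 0.1 * modules_touched)
    uncovered = 1.0 - line_coverage          # behavior the tests never exercised
    return exposure * max(uncovered, 1.0 - case_pass_rate)

risk = residual_risk(change_size=400, modules_touched=3,
                     line_coverage=0.65, case_pass_rate=0.98)
print(f"residual risk: {risk:.2f}")  # 1.0 * 0.35 = 0.35 -> add targeted regression
```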

4.9 Test input from test recall

1. Study the object under test: which instruction sets and operating environments affect its behavior, so that good test input can be constructed. As mentioned before, test input determines the upper limit of problems a testing activity can recall, so this is the starting point of recall, e.g. complete traffic simulation, topology simulation;

2. Study the object under test: which potential changes affect its behavior, feeding input into test evaluation. Only by fully understanding how risk is introduced can risk be evaluated well, e.g. the varying degrees of impact that code changes have on the object under test;

3. Study the object under test: which dimensions of the object can reflect its abnormal behavior. This is the starting point of test analysis and the core link in maximizing the number of recalled problems, e.g. curve fitting for memory leaks, performance fluctuations, etc.

Therefore, research around the object is essential, followed by using various generation, analysis, and evaluation tools to find ways to reveal errors.

From this perspective, traditional automation, whose execution leans toward test execution and test location, does not improve recall ability; the level of test recall is still determined by the written use cases and the traffic simulation. People are easily misled by the word "automation" into equating automation with automated execution. When implementing automation, everyone focuses on coverage, success rate, recall ability, and so on, but automation should in fact include continuous investment in test generation, test analysis, and test evaluation, with automation technology then making all of these abilities run routinely (historical automation is written by humans, and without continuous technical investment its recall ability stays frozen at some past moment, so over time the automation loses its edge). Why automated testing is still worth doing may be better understood from the perspective of quality cost, described in more detail later.

4.10 Looking at testing objectively

It is impossible for the test to recall all problems for the following main reasons:

1. Because the object's time and environment differ, there are inevitable gaps between the test environment and the object's actual service environment;

2. Testing investment is limited; it is impossible to recall all problems without limits on time and resources;

3. People are limited by experience and blind spots of attention, and occasional mistakes are inevitable.

But a QAer cannot give up on recalling as many problems as possible.

By type of recalled problem, test recall divides into the explicit and the implicit. The explicit part is new-function testing, the hot spot of testing and the most basic duty the business assigns to QAers: ensuring the correctness and delivery of new functions. But QAers should also see that as systems proliferate, especially under microservice architectures, system complexity keeps rising, and the impact of new functions on old functions and surrounding businesses is inevitable. Regression testing has thus gradually become crucial, yet it is easily overlooked because of limits on energy, ability, and focus, and pouring manpower into the vast regression set is obviously a low-ROI means, so technology may well be the key to regression testing.

05 About test classification

5.1 Classification from the perspective of recall question types

From the specific types of problems recalled, testing can be divided into performance testing, functional testing, security testing, interface testing, stability testing, UI testing, compatibility testing, etc. The recall methods differ, and this division helps one grasp the key recall methods for different classes of problems.

5.2 Classification from the perspective of test object hierarchy

From the level of the test object, testing can be divided into white-box testing, unit testing, module testing, system-level testing, etc. The standard for this division is the level of the test object: white-box objects may be functions or classes, and module testing may target a service. This division helps route the right problems to be recalled at the right level.

5.3 Classification from a technical point of view

From the perspective of technical means, testing can be divided into precise testing, automated testing, stress testing, exploratory testing, fuzz testing, traversal testing, online testing, and manual testing. This division is driven more by technology and reflects technology's role in testing; it does not focus on which classes of problems are found.

5.4 Classification from the perspective of recall responsibility

From the division of responsibility for recall issues, it can be divided into two categories: new function testing and regression testing.

The industry rarely divides testing into new-function testing and regression testing, so the following focuses on the definitions and benefits of these two categories.

5.5 New function test

Definition: Verify that the implementation of the code meets the business requirements.

Suggestion: RD should first try to ensure the correctness and rationality of the implementation, so in theory RD is the protagonist of new-function verification, with QA providing testing services as support.

Fields QA can study: testing services (environments, unit tests), automatic use-case generation, white-box scanning, use-case design and evaluation, etc.

5.6 Regression testing

Definition: verify the impact of upgraded code on the service and surrounding businesses.

Suggestion: regression testing should be QA's main responsibility, so that RD can spend more energy on new functions and value creation.

From the perspective of test methodology, new-function testing and regression testing decompose into the same test input, analysis, and evaluation, but their responsibilities differ in focus, and so should the way they are handled:

1. New functions, as the basic guarantee, should be verified by RD, who is responsible for checking whether the functions meet expectations;

2. Regression testing should be the main responsibility of QA, so that RD can focus more on the development of new features.

Fields QA can study: manual integration regression; interface automation regression; business-impact regression; joint debugging, etc.

New-function testing and regression testing clearly distinguish what different roles focus on when recalling problems, so that each role can find its own positioning, perform its duties, and divide the labor well.

06 Talk about recall costs

Definition: the total cost of controlling quality loss or recalling quality problems.

Basic form: unit-time cost × occupied time.

Why should the recall cost be mentioned:

1. Endless investment in quality control (which does not mean giving up on recalling more problems) wastes business iteration efficiency and resources, and the losses avoided may be smaller than the cost incurred.

2. The cost of controlling quality and recalling problems varies greatly across scopes, stages, and roles. Pursuing high-ROI recall requires studying how to adjust quality costs.

6.1 Cost Calculation

Single-problem cost: problem recall cost + localization cost + repair cost = recall time × unit recall resource cost + localization time × unit localization resource cost + repair time × unit repair cost.

Total recall cost: the sum of all single-problem costs.

For the same problem, the unit labor cost behind localization time and repair time (usually the same person) is identical whether the recall happens at a small scale or a large one, but a smaller-scale recall greatly shortens localization and repair time, and therefore reduces cost. This is in fact the basic rationale for shifting quality left.

To calculate recall costs, QAers generally classify them by recall level:

White-box-level recall: cost is basically equal to labor cost; repair and localization time is negligible.

Module-level recall: cost is basically equal to labor plus automation; repair and localization time is moderate.

System-level recall: cost is basically equal to labor plus automation; repair and localization time is relatively high.

Recall cost thus varies greatly across recall levels, so problems need to be mapped to a reasonable recall level.
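A worked example of the cost formula across two recall levels, with illustrative hours and a uniform labor rate:

```python
# Worked example of the single-problem cost formula above; all numbers are
# illustrative. The same problem is costed at two different recall levels.

def problem_cost(recall_h, locate_h, fix_h, rate_per_h=100):
    return (recall_h + locate_h + fix_h) * rate_per_h

# Caught by a unit test (white-box level): localization/repair nearly free.
early = problem_cost(recall_h=0.5, locate_h=0.1, fix_h=0.5)
# Caught in system-level integration: long localization and repair.
late = problem_cost(recall_h=2.0, locate_h=4.0, fix_h=3.0)
print(early, late)  # 110 vs 900 -- the basic rationale for shifting quality left
```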

6.2 Cost reduction method - reduce workload

To reduce recall cost, the first step is to minimize ineffective or repeated error-revealing behavior, because two premises really hold:

1. Not all changes have quality problems;

2. Not all behaviors reveal errors.

This requires the QAer to "see" before testing, which is the origin of the risk-based testing mentioned later. Hence QAers propose intelligent construction, used to intelligently trim automation tasks, and quality models, used to judge whether manual intervention is required, etc.

6.3 Cost reduction method - adjust the distribution

Second, we want the same problem to cost less to discover, locate, and repair, so the distribution of problems should be adjusted reasonably: recall at the code level wherever possible, then within a single module, and only then in integrated recall; and let machines recall wherever possible (human cost is generally believed to exceed machine cost). Adjusting the distribution can thus be understood in three ways:

Understanding 1 of adjusting the quality distribution, adjusting the means: use technology to adjust the recall distribution, raising the technical share of recalls;

Understanding 2 of adjusting the quality distribution, adjusting the stage: recall each problem at the appropriate "stage"; for example, new-function problems should be recalled before testing as much as possible, and complex cross-system problems in the integration testing phase as much as possible;

Understanding 3 of adjusting the quality distribution, adjusting the level: recall each problem at a reasonable level; for example, low-level logic problems at the code level, and functional problems at the module level.

Returning to shifting quality left: we should find ways to let problems at each level be found at that level. It does not mean that system integration problems must be recalled during development, nor is it simply moving testing tasks into the development or submission stages. A small routing sketch follows.
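Here is a minimal routing sketch, assuming an illustrative mapping from problem type to the cheapest recall level that can reliably catch it (the table entries are invented):

```python
# Hedged sketch of "recall at a reasonable level": route each problem type to
# the cheapest level that can reliably catch it. The table is illustrative.
RECALL_LEVEL = {
    "null-pointer / low-level logic": "code level (static scan, unit test)",
    "single-module function error":   "module level (interface automation)",
    "cross-system interaction":       "integration / system-level testing",
    "capacity and performance":       "system level + online risk detection",
}

def route(problem_type: str) -> str:
    # Default to the most expensive level when the type is unknown.
    return RECALL_LEVEL.get(problem_type, "integration / system-level testing")

print(route("null-pointer / low-level logic"))
```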

Whichever understanding you take, one, two, or three, realizing it is inseparable from one keyword: technology:

1. Understanding 1 is literally about technology: using technology to adjust the distribution;

2. The prerequisite of understanding 2, adjusting stages, is automation, because automation can run anytime and anywhere;

3. Understanding 3 means recalling with more white-box-level methods, likewise inseparable from technology.

6.4 Technology is an important carrier for adjusting problem distribution

PS: the technology mentioned here is not just automated execution.

1. Automated execution can accumulate the recall experience contained in use cases and transfer it to other roles, forming interleaved recall among roles.

2. Automated execution can run the recall experience anytime and anywhere, so tasks can be interspersed across stages and roles.

3. Quality models can distill recall experience into a model that predicts risk, makes comprehensive judgments, and then intercepts.

4. Recalling at the code level, module level, and system level naturally requires more technical recall, especially at the code level.

To sum up, the content of the test chapter mainly includes:

1. Introduced the concept of testing and, starting from that concept, the similarities and differences between quality and testing, to better carry out testing work;

2. Analyzed the testing process, the key links and factors that affect test recall, and the key role technology plays in each link;

3. Classified testing, in particular into new-function and regression testing, to better distinguish roles;

4. Considered how testing work should be carried out, and why technical recall is necessary, from the perspective of recall cost.

Building on this introduction to testing, improving technology is very important for test recall and test efficiency, and is an inevitable choice for doing testing well. The following chapters mainly discuss the understanding of testing technology.

07 Talk about testing technology

The following chapters discuss testing technology, which also follows from the foregoing. Traditionally, when people talk about testing technology, the first reaction is automation, and sometimes automation is even a synonym for the technical level of testing. But dissecting the essence of quality and testing as above shows:

1. Automation is not quite that: even traditional automation "essentially" cannot improve test recall ability; what it does is adjust the distribution and replace human execution, so it is only a means of improving efficiency.

2. A very interesting phenomenon: precisely because testing technology is understood as equal to automation, test platforms appeared, and the platform even became the emblem of QAer technology. The result is too little research on using testing technology to improve recall ability, and thus the loss of the QAer's signature skill; this needs to be corrected gradually.

The following chapters show that testing technology is not just automation (execution); to improve test recall, testing technology has much more promise.

7.1 The main responsibility of testing technology is high recall and efficiency improvement

As mentioned earlier, testing technology does not equal automation, so the main responsibility of testing technology is not only to improve efficiency: recall comes first, and second comes automating the whole process to replace human execution and adjust the distribution to improve efficiency.

Traditional automation is shown in the figure below: its core is executing, relying on the prior knowledge of human testing and implementing it with machines through technology, which manifests mainly at the execution level of test recall. But testing technology can actually do much more in assisting, mobilizing, and simulating people, especially with the support of ChatGPT, which greatly increases the possibilities here.

[Figure: traditional automation, centered on machine execution of use cases distilled from human testing experience]

  • Function 1 of testing technology: improving the effect of test recall

This returns to the earlier dissection of the essence of testing: the core factors affecting test recall are test input, analysis, and evaluation, and to improve the effect of test recall, testing technology should invest more here. A few examples for each follow, to better understand recall investment in the corresponding fields:

1. Test input : on the premise of accurately characterizing the system under test, build system simulation capabilities or directly run online testing activities, e.g. traffic simulation covering traffic diversion, recording, and playback; environment simulation, including vocabularies, configuration, and upstream/downstream relationships. Then comes construction of the test behavior set, e.g. interface fuzzing ability, traffic filtering ability, and so on.

Client-side testing is specifically mentioned here because it shows the key role of simulation capability and behavior-set construction in test input. The defining characteristic of client-side testing is uncertainty, for two main reasons:

1. The container in which the APP is deployed is the user's personal phone, which is uncontrollable, non-standardized, and changes with the environment;

2. The APP's behavior is controlled by the user and is highly random and unpredictable, whereas a server provides services through interfaces designed in advance by developers and standardized.

These bring:

1. A gap between testers' use cases and users' real behavior; testers can only ensure that functions are basically available or correct;

2. Testers cannot exhaustively cover the regression costs caused by non-standard operating behaviors and deployments;

3. If something goes wrong, there is no container-level quick rollback; the only option is to disturb the user into replacing the APP.

Therefore, client-side testing needs to think more deeply about model (environment) extraction and the design and selection of use-case sets (test activity sets), and there is plenty of room for technology here.

2. Test analysis : as mentioned earlier, test analysis observes the object's behavior under the input conditions to discover potential problems. This involves analyzing how problems manifest: for example, what a service core dump looks like, or what forms memory leaks and dirty data take. On that basis, judge and analyze whether the object is alive, whether its attributes exist, whether its logic is correct, and whether the function is good or bad; judging good or bad in particular requires analysis strategies based on statistics and probability. Without analysis there is no recall; for example, a unit test without checkpoints is useless.

In fact, all of this can be summarized as: obtain more system performance data, then apply strategies to judge whether there is a problem; that is test analysis. Some judgments are explicit, some implicit; some look at a single indicator, others require clustering, so QAers need to do more research here (a minimal sketch follows after item 3 below).

3. Test evaluation : a link easily overlooked in the past. People believed that once execution finished and the report was OK, all was well, little knowing that problems run as an undercurrent. Related technologies have emerged, such as feature mining of changes, mining of change impact surfaces, collection of test behavior data (coverage), and quality models for final decisions, even generating decisions and follow-up recall behaviors from test evaluation.
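Returning to the test-analysis point above (judging "good or bad" from performance data plus a strategy), here is a minimal sketch: flag a memory leak by fitting a trend line to sampled memory usage; the threshold is illustrative.

```python
# Hedged sketch of "performance data + strategy": judge a memory leak from
# sampled memory usage by fitting a least-squares trend line.
def leak_suspected(samples_mb: list[float], slope_limit: float = 0.5) -> bool:
    """Fit a slope over equally spaced samples; flag sustained growth."""
    n = len(samples_mb)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples_mb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope > slope_limit  # MB per sampling interval

print(leak_suspected([100, 102, 105, 107, 110, 113]))  # True: ~2.6 MB/interval
```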

The technical investments above help improve test recall and require continuous investment, because the objects keep changing; this is also the driving force for QAers to keep innovating and changing.

The above analyzed technical investment from the three key factors that affect test recall. Next, from another perspective, consider the irreplaceable role of testing technology in test recall.

Premise 1: recall by "people" is experience-driven and problem-driven testing. People have limitations in vision, responsibilities, experience, and energy, which lead to test leakage;

Premise 2: what people can "see" is often only the surface behavior of the system. Problems may already exist inside the real system, but what has not yet manifested is unobservable.

Based on these two points, technology can help compensate for these shortcomings, so corresponding recall technologies can be developed from them, such as active recall technologies: intelligent UT, AISA (intelligent code defect detection), and online risk detection (real-time detection of risks in the online system, such as single-point deployment risks and unreasonable timeout configuration risks).

By how much test recall depends on "people", the impact of technology on recall generally divides into:

1. Recall technologies that do not depend on the "QAer" at all or that go beyond human capacity: intelligent UT, AISA, traversal, quality models, full use-case generation, etc.;

2. Technologies that assist people in improving test recall: coverage evaluation, assisted use-case generation, etc.

Necessity analysis of online testing

Offline testing has two natural disadvantages:

1. Simulation of the object's operating environment (such as topology, data, and state) can never reach 100%, and problems in some extreme cases hinge precisely on that simulation capability;

2. The uncertainty brought by many intervention operations cannot be exhausted or fitted by offline test case construction, e.g. the particularities of online traffic, PM operations, and scaling in and out. These operations are missing offline and often lead to problems.

Based on this, under certain conditions, targeted tests can be carried out on the online system to improve recall capability. Online testing used to raise concerns about safety and impact, and related practice was relatively rare, but the rise of microservices and cloud-native technology provides basic technical support for it, so online testing has become feasible, especially for major events and high-traffic scenarios.

  • Function 2 of testing technology: execution without relying on "people", i.e., improving efficiency through automation

As mentioned earlier, automation in the narrow sense of automated execution does not improve recall, so why still mention automation? A breakdown is needed here:

1. Research on test input, test analysis, and test evaluation technology is driven by recall, but if these processes can be automated and chained together, efficiency improves, so automation deserves study;

2. Fully manual test execution is very labor-intensive, so from the perspective of test recall cost, automation deserves study.

Talking about automation

The essence of automation: turn the collective wisdom of members into an asset of the whole team, and let machines run the testing behaviors.

The whole process covers all of testing: test input, analysis, execution, location, and evaluation. Automation is the technical carrier on which these test recall technologies run; the recall technologies come first, and automation is what strings them together.

From the perspective of the above definition, automation has the following characteristics:

1. Automation can execute anytime and anywhere without relying on human energy, which is a prerequisite for shifting quality left and adjusting the quality distribution.

2. Automation replaces humans with machines, improving human efficiency and freeing people for more creative work.

3. Automation is also critical in that it inherits and accumulates the team's wisdom from the start, so experience no longer lives in only one person's head. From this perspective it can actually recall more problems, though recall ability still depends on people's continuous accumulation and input. Therefore the precipitation of automation assets must be done well; automation is the ongoing embodiment of the team's collective wisdom, which is critical for system regression.

4. Large-scale regression increases confidence.

Split by the type of problem recalled, automation divides into:

1. New-function automation: it can assist with use-case writing, environment automation, and so on, and then execute automatically;

2. Regression automation: automating the check of a change's impact on old functions and neighboring businesses; this is what people usually mean when they talk about automation.

New Feature Automation

Fully automating the testing of new functions, i.e., automatically generating and executing test cases from the function points, is still very difficult today, but assistance is already feasible:

1. Auxiliary use case generation

2. Auxiliary environment generation, test data generation, etc.

The focus here should first be on how to provide testing services, rather than on full automation or on record-once, replay-many research whose results are only usable a single time.

Importance of regression testing

Reason 1: New demands from business development: the current testing hotspot is new functions, but the impact of new functions on historical debt and on surrounding businesses is hard for people to assess, and judging by escaped defects, such problems are indeed increasing. Continuously accumulated automated regression removes the reliance on human judgment.

Reason 2: New challenges from organizational behavior: personnel changes (resignation, transfer, internal rotation, pooling, etc.) challenge any testing capability built on individual business experience, so experience must be accumulated continuously in a durable form.

Reason 3: Investment in recall technology has been limited in recent years: of the three core factors that affect recall capability, namely test input, test analysis, and evaluation, evaluation currently receives the most attention, while technical investment in input and analysis, such as environment simulation, configuration simulation, and traffic, is clearly lower than before.

Reason 4: Business demands keep rising: against the backdrop of cost reduction and efficiency improvement, the share of recall handled by automation, especially regression, needs to grow (hotspot testing and regression actually require huge investment).

Reason 5: The maturity of automated regression reflects the maturity of testing technology: truly powerful automation needs AI behind it for effectiveness, recall, and ROI, which in turn raises confidence in quality while reducing costs and improving efficiency.

A note on manual regression, which today is constrained by the limits of automated execution: regression testing in practice includes both manual and automated regression. Everything said above about automated regression applies equally to the manual kind; the essence is the same, the only difference being whether the executor is a machine or a person. Manual regression likewise needs good deposition, characterization, and evaluation of experience, and deposition matters most. Manual regression testing therefore also deserves technical empowerment, for example use-case recommendation, online support, and exit-readiness evaluation.

Technology is the inevitable path to high-ROI regression testing

The goal of regression testing is to recall, through testing activities, the problems a change causes in the business's historical functions and in surrounding business functions. If regression is done manually, a person must first set up the environment, run every use case covering the existing functions and the related surrounding functions, and then observe the system's behavior. This raises several problems:

1. As the system keeps iterating, its functional logic becomes very complex, so the number of use cases is bound to grow;

2. As the business develops, interactions between systems become frequent and complex; testers must assess the impact on related businesses and execute the corresponding use cases, and such scenarios keep multiplying;

3. Regression testing usually recalls far fewer problems than new-function testing, and a change does not necessarily affect old functions at all, so running the full suite every time yields a particularly low ROI.

Taken together, these points show that:

1. Regression testing must run in a high-ROI way, and relying on human execution is not feasible;

2. Relying on people to assess impact is fragile because people differ, which easily lets defects escape; the assessment needs to be deposited into technology to even out those differences.

Therefore only technology can solve regression testing, and it genuinely can: for regression, the use cases have already been written by testers, so the core is automated execution, and impact assessment, use-case selection, and automated execution are all inherently technical problems. A minimal selection sketch follows.
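As a minimal sketch of that idea, change-impact-based selection might look like the following; the case-to-file mapping is entirely hypothetical, standing in for coverage data accumulated over time, which replaces an individual's memory of what a change touches.

```python
from typing import Dict, List, Set

# Hypothetical mapping, accumulated from past coverage runs: which test
# cases exercise which source files ("team wisdom" deposited as data).
CASE_TO_FILES: Dict[str, Set[str]] = {
    "test_login":   {"auth/session.py", "auth/token.py"},
    "test_search":  {"search/ranker.py", "search/index.py"},
    "test_billing": {"billing/invoice.py"},
}

def select_regression_cases(changed_files: Set[str]) -> List[str]:
    """Pick only the cases whose covered files intersect the change.

    Running this subset instead of the full suite is what lifts the ROI:
    impact assessment is done by data, not by one person's judgment.
    """
    return [case for case, files in CASE_TO_FILES.items()
            if files & changed_files]

print(select_regression_cases({"search/ranker.py"}))  # -> ['test_search']
```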

To sum up automation: experience is accumulated through automation so that it can be inherited through technology without variance between people, removing the dependence on individuals, letting newcomers quickly reach a high level, and keeping everyone's recall ability at that level; constant iteration then drives recall capability ever higher.

  • On risk-based testing technology, which Baidu has been working on for years

Definition: based on an analysis of potential risk, decide on test input and analysis behavior so as to uncover errors with a high ROI.

From a theoretical standpoint: first, test recall must account for recall cost; second, testing grounded in human experience inevitably has blind spots; so machines are needed to help make the risk judgments.

From the perspective of benefits, why risk-based testing technology:

1. It prevents wasted resources: testing behavior is decided by risk, e.g., whether to run at all and how hard to run;

2. It fills recall gaps: risk-based quality assessment recalls more potential problems;

3. It pursues efficient execution: risk drives test duration, test frequency, and so on, giving testers real-time guidance for judging precisely when a testing activity can stop.

In theory, if the risk model behind these decisions is effectively and continuously enriched, its recall ability will be no lower than the human baseline, and may even exceed it in some scenarios, while also achieving a higher ROI. A minimal scoring sketch follows.
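To make the decision flavor concrete, here is a minimal, illustrative risk-scoring sketch; the signals, weights, and thresholds are all assumptions for this example, not the model Baidu actually uses. Note the output is a decision about the behavior (run per commit, run nightly, skip), not merely a selection.

```python
from dataclasses import dataclass

@dataclass
class CaseSignal:
    """Per-case signals a risk model might consume; weights are illustrative."""
    coverage_of_change: float       # 0..1, overlap with the current change
    historical_failure_rate: float  # 0..1, how often this case caught bugs
    cost_seconds: float             # execution cost

def risk_score(s: CaseSignal) -> float:
    # Coverage of the change and past recall raise the score; cost lowers it.
    return (0.6 * s.coverage_of_change
            + 0.4 * s.historical_failure_rate) / max(s.cost_seconds, 1.0) ** 0.5

def decide(s: CaseSignal, run_threshold: float = 0.05) -> str:
    # A decision, not just a selection: run, run less often, or skip.
    score = risk_score(s)
    if score >= run_threshold * 4:
        return "run every commit"
    if score >= run_threshold:
        return "run nightly"
    return "skip this round"

print(decide(CaseSignal(coverage_of_change=0.9,
                        historical_failure_rate=0.3,
                        cost_seconds=120.0)))  # -> run nightly
```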

The difference between risk-based testing and precision testing:

1. Data source: precision testing centers on coverage; for risk-based testing, coverage is only one of several data sources;

2. Strategy: risk-based recommendation includes strategies such as models, which precision testing generally does not;

3. Effect on testing: precision testing focuses on "selecting" and "seeing"; risk-based testing focuses on decision-making, e.g., whether to execute a behavior at all, to what degree, how much simulation is required, and whether supplementary tests are needed;

4. Capability: precision testing also includes fault localization, which risk-based testing technology does not.

7.2 Talk about TestGPT

In this final part, some preliminary thinking is laid out around the recently popular chatGPT and the many fields derived from it, such as Code-GPT; this deposit of ideas will keep being strengthened and refined.

Significance

chatGPT has indeed brought many changes to human society. There are two core reasons for its success:

1. Natural-language interaction further lowers the entry threshold, so the general public can participate;

2. Generative output gives artificial intelligence vitality and conclusive answers rather than static information display, which sparks interest and genuinely helps people.

Technically there are two key factors:

1. A large amount of structured data and closed-loop feedback form a positive cycle;

2. Technological breakthrough of large models.

Inspiration for software testing:

1. If something as complex as large models over big data can produce good results in general, then going vertical into software testing, with model training over large code corpora and application-specific fine-tuning, should also be possible; this alone supplies a certain confidence.

2. As discussed earlier, software testing divides into new-function testing and regression testing. For new functions: if code generation and recommendation are achievable, the machine essentially already understands the code, so generating use cases against given standards is within reach. For regression: regression is essentially decision-making over a system description plus the team's continuously accumulated wisdom in order to uncover errors, which is precisely data deposition plus a large model in the software domain, so a near-optimal regression plan can basically be formed.

Therefore TestGPT, to give it a provisional name, is possible, and there is real confidence behind it. A hedged generation sketch follows.
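As a sketch of the new-function side of that idea, the snippet below outlines LLM-assisted unit-test generation. `call_llm`, the prompt, and the workflow are placeholders rather than any specific product's API; generated tests would still need validation (human review or a sandboxed executor) before entering a regression suite.

```python
# A minimal sketch of LLM-assisted unit-test generation, under the
# assumption that some completion endpoint is available to the team.

PROMPT_TEMPLATE = """You are a test engineer. Given the function below,
write pytest unit tests covering normal inputs, boundary values, and
error handling. Output only Python code.

{source_code}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whatever model endpoint is actually used.
    # Returning a canned string keeps the sketch runnable end to end.
    return "def test_placeholder():\n    assert True\n"

def generate_unit_tests(source_code: str) -> str:
    """Ask the model for candidate tests; validation happens downstream."""
    return call_llm(PROMPT_TEMPLATE.format(source_code=source_code))

print(generate_unit_tests("def add(a, b):\n    return a + b\n"))
```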

Feasibility Analysis

TestGPT becomes executable if the following hold:

1. A very powerful automation system and the ability to structure internal data exist; adding manual collection of online data would make the picture more complete.

2. With the investment in risk-based testing and its infrastructure in recent years, the outline of decision-making already exists. What remains is to digest the differences between the relevant implementation technologies, Wenxin Yiyan, and chatGPT, stand on the shoulders of giants, and do the model fine-tuning and data structuring well.

3. Minds must be emancipated: QAers cannot survive on the competitiveness of "experience"; they must survive on "creativity", so that humans drive the machine; otherwise they will sooner or later be displaced by AI.

Specific scenarios - the third phase of Baidu Smart Test

As for how to do it, here are some thoughts on the directions worth investing in:

1. Code is the front line of quality problems. Code-level quality technology includes code understanding, code instrumentation, unit-test code generation, intelligent static defect identification, intelligent dynamic defect identification, risk prediction, intelligent defect localization, and natural-language use-case generation based on code understanding, all of which presuppose a full understanding of code logic. We have been investing here since 2019; seizing the opportunity to research these fields should produce unexpected results and, in turn, improve development and code quality.

2. Train TestGPT for each module, system, and business:

  • TestGPT would have the following functions. Perception: perceive all kinds of changes related to the object under test and their possible impacts. Decision-making: based on perception plus the existing recall capability, decide on the error-discovering behavior set; the decision may be a reasonable scheduling of the existing behavior set, generation of a new one, or both. Response: execute the existing behavior set, or generate behaviors in real time and execute them. Evaluation: assess exit risk by combining the changes with the error-discovering behavior set, and finally make any necessary supplementary tests.

  • A QAer's main future work is tuning services and training risk models: the QAer trains the large model for the corresponding module, service, and business, and drives test generation, decision-making (AI), and execution (automation), while the experience-based pipeline and test-task mindset fade away. In terms of working form: TestGPT senses input changes and requirement information, outputs test plans, use-case sets, and execution results, and closes with an accurate final evaluation. A fully automated flow requiring no human execution is also possible, with test plans, use-case sets, and so on still accumulating. Grounded in models and data, a tester's daily work around the business system's large model consists of model creation, training, tuning, and use.

3. As the accumulated knowledge of a team, TestGPT supersedes knowledge bases, documents, and the like, and becomes each QAer's working secretary. A skeleton of the loop described above is sketched below.
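The following skeleton illustrates the perception/decision/response/evaluation loop described above. Every method body is a placeholder, and the class and its signatures are purely illustrative assumptions; the point is that each round's results feed the next round's decisions.

```python
from typing import Dict, List

class TestGPTAgent:
    """Skeleton of a perceive -> decide -> respond -> evaluate loop."""

    def perceive(self) -> Dict[str, List[str]]:
        # Collect change signals: diffs, config changes, dependency bumps.
        return {"changed_files": ["search/ranker.py"]}

    def decide(self, signals: Dict[str, List[str]]) -> List[str]:
        # Combine signals with the accumulated recall model to schedule
        # (or generate) an error-discovering behavior set.
        return [f"regress::{f}" for f in signals["changed_files"]]

    def respond(self, behaviors: List[str]) -> List[str]:
        # Execute existing behaviors or generate-and-execute new ones;
        # returns the failures found (none in this stub).
        return []

    def evaluate(self, behaviors: List[str], failures: List[str]) -> bool:
        # Exit-risk assessment: decide whether supplementary tests are needed.
        return len(failures) == 0

    def run_once(self) -> bool:
        signals = self.perceive()
        behaviors = self.decide(signals)
        failures = self.respond(behaviors)
        return self.evaluate(behaviors, failures)

print(TestGPTAgent().run_once())  # -> True in this stub
```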

To sum up, the testing technology chapter mainly covers:

1. Testing technology is not limited to test automation; on the recall side alone it has broad room to grow;

2. As the business keeps iterating and software complexity and dependencies keep growing, regression testing, especially automated regression testing, has become particularly important;

3. In the LLM era, the feasibility analysis suggests TestGPT has plenty of room;

4. With the twin tailwinds of cost reduction/efficiency improvement and LLMs, testing technology has great potential.

08 Very important: if you read nothing else, read the following points

1. Strengthen research into quality technology; its breadth and depth go far beyond today's

Architecture robustness, service flashback, software sustainability, test recall technology, test automation technology, code detection technology, monitoring recall technology, and other fields are all far from ideal, and practitioners need to think, research, and land practical solutions.

2. Strengthen research on the object under test

The basis of everything is knowing yourself and knowing the other side; only then can you take the initiative. Research the test object well and describe it well; understanding the object under test is paramount.

3. Strengthen research into testing technology; its breadth and depth go far beyond today's

Testing technology is not the same as automated execution. To improve test recall, there is a great deal that can be done technically across test input, analysis, and evaluation.

4. Pay attention to regression testing and keep depositing experience

As business complexity grows, the accumulation and intersection of modules and cross-business dependencies become complex and hard to control. Continuously accumulated wisdom is needed for precise regression and recall of mutual impacts, not just functional verification of "new requirements".

5. Increase investment in active recall technology to make up for human blind spots

Constrained by objective conditions, people have blind spots of experience and energy. Recall by technology that does not depend on people is the key to catching what would otherwise slip through.

6. Technology is an important carrier for redistributing recall costs

  • Automated execution lets the recall experience embedded in use cases accumulate and transfer to other roles, so recall can be interleaved across roles.

  • Automated execution can run that recall experience anytime and anywhere, so tasks can be interleaved across stages and roles.

7. Believe in TestGPT; QAers should live by "creation", not "experience"

"So-and-so knows this business very well" cannot serve as personal competitiveness; experience will be replaced by AI or by other people.

09 Summary

Quality and testing are not just "click, click, click", and many quality problems cannot be fully solved by clicking around. Once you set out to solve problems with technical thinking, you will find many technologies worth studying, and not only within software engineering; like their counterparts elsewhere in human society, they are perennial research topics, such as the precise prevention-and-control work of recent years. Keep thinking, and embrace change with the mindset that the future has already arrived.

——END——

Recommended reading :

Baidu APP iOS terminal package size 50M optimization practice (1) overview

Web-side video frame interception scheme based on FFmpeg and Wasm

Baidu's R&D efficiency transformation from measurement to digitalization

Baidu content understanding reasoning service FaaS combat - Punica system

Exploration and Practice of Accurate Water Level in Flow-batch Integrated Data Warehouse

Text Template Technical Solution in Video Editing Scenario
