Comprehensive Software Testing

1 An overview of software testing across the whole process

In traditional software testing, developers finish their work and only then hand the build over to testers. In this mode, testers cannot find requirements-stage defects early, the test work lags behind development, product quality cannot be effectively controlled through the process, and the overall schedule may slip because of rework.

What, then, is full-process software testing? It can also be called comprehensive software testing, as shown in the following figure:

Throughout the SDLC, there are three role lines and four stages.

There are three main role lines: development, QA, and testing. This article focuses on testing.

Four stages: requirements, development, release, and daily operation.

Simply put, it can be summarized as shown in the following figure:

Testers run through these four stages, carrying out testing activities. The practical activities are briefly described in the figure below:

Each stage also has corresponding activities for developers and for QA personnel.

For a product, each version iteration goes through requirements, development, and release, and is finally pushed to daily operation. The dotted lines from the release stage back to the requirements stage and on to the daily operation stage indicate that neither is a terminal stage; the whole thing is a process of continuous iteration.

How does the tester carry out testing activities throughout the whole process?

2 Requirements stage testing

In the requirements phase, the main tasks of developers, testers, and QA personnel are shown in the following table:

| Stage | Developers | Testers | QA staff |
| --- | --- | --- | --- |
| Requirements stage | · User story analysis · User story time estimation | · Participate in user story analysis and mine story ambiguity · Refer to the experience library to question the development time estimate | · Ensure that requirements validation activities conform to the requirements management process · Manage user story reviews · Manage requirements changes |

Key practices as a tester are as follows:

Participate in user story analysis and mine story ambiguity

In the sprint meeting, analyze each user story to check whether its functional and non-functional requirements are clearly described; non-functional requirements can serve as acceptance points. Take, for example, a user story:

"Customers want improved response times"

The tester should help the developer disambiguate the story: improve the response time of what, and to what response time? One might suggest changing it to:

"The response time for results returned by a common query on customer information is within 5s"

This states that in the "customer information" module, when a "common query" is performed, the result is returned within 5s. The sentence is now clearly expressed and the ambiguity is gone. Likewise, the tester can write the user story for improved query efficiency as:

"A customer performing a common query in the information query module gets results back within 5 seconds"

"Remark: 5s is a non-functional requirement, and it is also a key acceptance point"
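As a minimal sketch of how such an acceptance point might be checked automatically, consider the JUnit test below; CustomerInfoClient and commonQuery are hypothetical names standing in for the real module's API:

import java.time.Duration;
import java.time.Instant;
import org.junit.Assert;
import org.junit.Test;

public class ResponseTimeAcceptanceTest {

    @Test
    public void commonQueryReturnsWithinFiveSeconds() {
        CustomerInfoClient client = new CustomerInfoClient(); // hypothetical client for the "customer information" module
        Instant start = Instant.now();
        client.commonQuery("customer-0001"); // hypothetical "common query" call
        Duration elapsed = Duration.between(start, Instant.now());
        // 5s is the non-functional requirement and the acceptance point
        Assert.assertTrue("common query exceeded 5s: " + elapsed,
                elapsed.compareTo(Duration.ofSeconds(5)) <= 0);
    }
}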

Reference the experience library to question the time estimate for development

In the sprint meeting, developers estimate time by playing cards (with rules defined by the team itself, using playing cards). When the final estimate is given, the tester should question it, drawing on the historical experience library: how skilled is this developer in this area, what defects have occurred in this module, how much time was spent fixing them, and so on. Weighing these factors and asking questions pushes the development estimate to take them into account as far as possible. Of course, one prerequisite for a tester to question estimates is relevant development experience.

Summary: in the requirements phase, testers should play their part in preventing ambiguous requirements from flowing into the development phase, and at the same time assist development with time estimation.

3 Development stage testing

In the development phase, the main tasks of developers, testers, and QA personnel are shown in the following table (the tester column summarizes the key practices described below):

| Stage | Developers | Testers | QA staff |
| --- | --- | --- | --- |
| Development stage | · Coding · Unit testing · Continuous integration | · Confirmation of function points · Test case design · Use case review · Test exploration · Release of the burndown chart · Defect experience library · Improve the quality of development self-test · Continuous-integration feedback · Sonar feedback | · Ensure that development activities conform to the process · Manage reviews |

Key practices as a tester are as follows:

Confirmation of function points

Xmind is a very useful mind-mapping tool. Usually, before developers start coding, testers confirm with them the user stories produced by the requirements work, correcting any deviation in understanding and ensuring that both sides understand the requirements the same way.

Figure-5-Brain map use case template

test case design

Testers mainly design test story points and use a DSL (Domain-Specific Language) to describe test cases. A test case comprises three basic elements, Feature, Scenario, and Example, plus two supplementary elements, xmind and Requirement.

Feature: classifies the test under a certain module and describes the business purpose of the feature itself, bringing in the business goal and transferring business knowledge.

Scenario: marks a test scenario of this Feature; the steps can be described in text or in an xmind brain map, and the data in the scenario uses the values listed in Examples.

Example: leads to a concrete data table that displays all the data used, avoiding the redundancy of repeating the same steps several times just because the test data changes.

Xmind: the brain-map file showing the test story points.

Requirement: the requirement id associated with the requirements management system.

As agile becomes better and better known, agile testing has received more attention. Here I would like to describe an automated-testing problem I encountered in an agile project and how we solved it with the help of a DSL, a domain-specific language.

Anyone with some understanding of agile software development knows that it is an iterative delivery process, with each iteration having a shorter lead time. To support such frequent delivery, agile testing must adjust compared with traditional testing, which creates several challenges unique to testing in agile projects:

  1. Frequent regression testing to ensure each iteration is deliverable
  2. Involve the entire development team in testing activities to shorten the feedback cycle for quality information
  3. Involve customers in testing activities to help improve testing effectiveness

Automated testing plays a key role in meeting the challenge of frequent regression testing. Do a bad job of it and the team will eventually be overwhelmed by the regression workload that grows with each iteration.

In one team I worked with, everyone realized the importance of automated testing early on and spared no effort investing in it. We believed that once enough automated functional tests were in place, they could guide manual regression testing and keep the delivery process smooth.

Indeed, when automated testing first started, we benefited a lot. For every automated test we added, we eliminated some manual tests. Automation gave us more time to manually test the functional points that had not yet been automated or were hard to automate, and left us time and energy for exploratory testing. This made the team feel that life was good, and it strengthened our belief in automated testing.

However, the good times did not last. As the automated tests kept increasing, we faced some problems:

  1. The automated tests revolve around implementation details. As their volume grows, the outline of the business easily gets lost in the details.
  2. Tracking of tests is lost at the functional level: testers cannot tell exactly which test cases the automated tests cover, so on every regression the team has to run the entire suite.

As a result, our manual testing got less and less help from automated testing, which had turned into something hard to use yet hard to discard: the test code was difficult to read and maintain, and its results were difficult to work with. We were investing considerable time not only in adding automated tests but also in reading and making use of their results.

So we began to re-examine our automated-testing practice and to explore better approaches.

Very quickly, we discovered that "being able to run" is not the only property a good automated test needs. Let's look at what is going on in a piece of test code:

selenium.open("/")
selenium.type("id=username", "myname")
selenium.type("id=password", "mypassword")
selenium.click("id=btnLogin")
selenium.waitForPageToLoad("30000")
assertTrue(selenium.isTextPresent("Welcome to our website!"))

In this test, we first open a page, find the input box with id username and type "myname", find the input box with id password and type "mypassword", click the button with id btnLogin, wait up to 30 seconds for the page to load, and then assert the text that should appear on the page.

We can see that the test's implementation completely describes the operating procedure of the test: it is a step-oriented description rather than a purpose-oriented one. With a little analysis we can still see that its purpose is to test that a user successfully logs into the system.

However, imagine having many such step-oriented tests: extracting the test intent buried under countless detailed steps, and using the results, is no longer intuitive. Moreover, if a test fails, it is not so easy to locate the specific function point at fault.

At the same time, not every team member can read and write such tests, which undoubtedly reduces their participation in automated testing. To customers, automated testing is even more of a black box: what is covered and what is not is basically unclear, let alone their participating in automated testing to help improve its effectiveness.

In all these situations the root cause is that the tests read too poorly and the test intent is not obvious enough. Good automated tests are both runnable and easy to read. Only then can we keep tracking and managing test cases at all times: testers can quickly read the tests to learn which functions are already covered by automation and plan the manual-testing workload effectively.

How to improve the readability of the test?

Our solution was a DSL, a Domain-Specific Language.

What is a domain-specific language? Martin Fowler's blog describes it in more detail. Roughly speaking, a domain-specific language is a programming language built for a specific purpose in a particular domain. Unlike general-purpose languages such as Java and C#, which can solve problems in any domain, a DSL uses its own grammatical structure to describe services in terms much closer to the professional domain language.

Making the description of a test close to the domain language of the system under test, so that the test intent is clearly expressed, is exactly what we wanted, and a DSL can help us achieve it.

Let's look at the previous code again:

selenium.open("/")
selenium.type("id=username", "myname")
selenium.type("id=password", "mypassword")
selenium.click("id=btnLogin")
selenium.waitForPageToLoad("30000")
assertTrue(selenium.isTextPresent("Welcome to our website!"))

Because it is written in a general-purpose language, it is too detailed and procedural for our usage scenario, and the test intent cannot be clearly expressed.

Switching to a DSL, our test can be described directly in the language of the acceptance criteria:

Given I am on login page
When I provide username and password
Then I can enter the system

This way the content of the test is much more intuitive and carries business information: we know this is testing a login scenario, not arbitrary input, which also serves the duty of transferring business knowledge. The runnable code behind the DSL is hidden, so anyone who cannot read the original test code, whether a requirements analyst, a customer, or a tester less focused on automation code, can join automated-testing activities and give feedback without being put off by the "noise" of the code.

Of course, in our real scenario the requirement was not that simple. The acceptance criteria also had to consider different data, such as different combinations of username and password:

Given I am on login page
When I provide 'david' and 'davidpassword'
Then I can enter the system

Given I am on login page
When I provide 'kate' and 'kate_p@ssword'
Then I can enter the system

and more test data.

In this case, plain readable language alone is not enough; the number of tests is what it is, and if it cannot be reduced, maintenance remains cumbersome. For example, if the system changes to require a username, a password, and a random verification code on every login, we would have to modify many places in our automated tests. So, on top of readable natural-language descriptions, we need to raise the level of abstraction a little further.

Fortunately, the DSL tool we chose at the time was Cucumber. Besides providing several levels of test description (Feature, Scenario, Steps), it offers a very good organizing device: the data table.

With it, our automated tests could split the login function into feature, scenario outline, and concrete steps, layered clearly; and with the data table, a procedure repeated with varying input data collapses into a single scenario outline:

Feature: authentication
  In order to have personalized information
  I want to access my account by providing authentication information
  So that the system can know who I am

  Scenario Outline: login successfully
    Given I am on login page
    When I provide '<username>' and '<password>'
    Then I can enter the system

    Examples:
      | username | password      |
      | david    | davidpassword |
      | kate     | kate_p@ssword |

The test now looks much cleaner. First, the Feature keyword classifies the test under the larger authentication feature and describes the feature's business purpose, bringing in the business goal and transferring business knowledge. Then the Scenario Outline keyword states that what we are testing in this scenario is a successful login, with the steps written out. Finally, the Examples keyword leads to a concrete data table displaying all the data used, so the same steps are not repeated just because the test data changes, avoiding redundancy. Should the requirements change to ask for a username, password, and verification code together, our test needs only small changes.
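For illustration, here is a minimal sketch of the kind of step-definition code that can sit behind this DSL, assuming Cucumber's Java bindings and the Selenium RC style client from the earlier snippet; the host, port, browser, and base URL are assumptions:

import com.thoughtworks.selenium.DefaultSelenium;
import io.cucumber.java.Before;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertTrue;

public class LoginSteps {
    // Selenium RC client, as in the earlier snippet; connection details are assumptions
    private final DefaultSelenium selenium =
            new DefaultSelenium("localhost", 4444, "*firefox", "http://app.example");

    @Before
    public void startBrowser() {
        selenium.start();
    }

    @Given("I am on login page")
    public void iAmOnLoginPage() {
        selenium.open("/");
    }

    // {string} matches the quoted values fed in from the Examples table
    @When("I provide {string} and {string}")
    public void iProvideCredentials(String username, String password) {
        selenium.type("id=username", username);
        selenium.type("id=password", password);
        selenium.click("id=btnLogin");
        selenium.waitForPageToLoad("30000"); // wait up to 30 seconds
    }

    @Then("I can enter the system")
    public void iCanEnterTheSystem() {
        assertTrue(selenium.isTextPresent("Welcome to our website!"));
    }
}

Note that if the login flow later adds a verification code, only the When step and the data table need to change; the feature's intent stays readable.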

What's even better is that this data-table approach improved the collaboration efficiency of the whole team. For testers less fluent in writing code, adding automated tests now means adding more test data, which they can simply fill into the data table.

In this way we used a DSL to implement executable, highly readable documentation. It supports regression testing, reduces the cost of maintaining documents, and encourages team members to use tests to transfer knowledge, so that more people can take part in testing.

use case review

The main point is to adhere to peer review, carried out mainly within the test group, with the developers responsible for the task also participating. Simply put, it is the work of finding gaps in the test cases and filling them.

test exploration

After the "Function Key Confirmation" and "Use Case Review", in order to ensure the coverage of the test scenarios, further test exploration is required. After the developer completes the prototype, use the strategy of exploratory testing to conduct a purposeful quick walkthrough of the basic process of the function, dig out the places where the function is uncertain and supplement the test scenarios, and avoid uncertain factors from delaying to the later stage of the development stage, resulting in rework .

Beyond that, functional testing, bug tracking, regression testing, system testing, and acceptance testing are all necessary links in the daily testing work.

Release of Burndown Chart

In addition, testers have another important job: publishing the burndown chart daily so the team understands the current progress, summarizing problems, and seeking solutions for tasks that take longer than expected.

Figure-6- Burndown Chart

Chart patterns:

1) The remaining working hours are above the planned baseline, meaning progress is delayed; watch the schedule closely.

When such problems are found, analyze and summarize. The principle is to guarantee the delivery time and adjust the corresponding tasks, embracing change: if the task granularity proves too large, keep splitting what should be split. Be cautious about refactoring, and do not refactor too deeply, because that brings extra testing workload and affects the overall progress. For the whole version, a task truly completes only when both development and testing finish within the promised time; development finishing its delivery alone does not count as success.

2) The remaining working hours are close to the planned baseline, meaning progress is good; keep it up.

Even then, check whether the high-priority tasks are being given their time under this pace, rather than the burndown chart merely looking good because simple tasks were processed first. Some developers like to cherry-pick tasks, completing the easy ones first because those always finish within expectations, so the early trend of the burndown chart looks fine.

Defect experience library

Every team has newcomers and veterans in both development and testing. When confirming requirements with new developers, testers should also remind them of past defects and lessons learned, to save them from detours.

Improve the quality of development self-test

Testers can provide relevant checklists (adapt them to your team, starting from the original author's version) to help developers focus on the key points of self-testing during coding, thereby improving quality.

Figure-8- Web software test checklist

continuous integration

Use a continuous-integration platform (Jenkins) to quickly build the code and run automated unit tests, improving the efficiency and quality of code development.

Developers responsible for unit tests receive emails about failed builds;

Developers responsible for integration tests receive emails about failed builds;

The test manager in charge of automated testing (Selenium) receives emails about failed builds.

This ensures that unit tests, integration tests, and automated tests are watched and maintained by the relevant people.

Figure-9- Continuous Integration

Sonar Feedback

Sonar is an open platform for managing code quality; as such, it covers the seven axes of code quality.

Figure-10- Sonar analysis results

The main items testers give feedback on are as follows:

Code coverage: The team requires a code coverage rate of over 80%;

Test success: The team requires a test success rate of 100%;

Duplications: The team requires the code duplication rate to be below 10%;

Violations: the team requires fewer than 20 violations of Major-level code rules;

The development team must meet the quality goals at each stage in order to guarantee the overall quality goal.
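For teams that want these gates checked mechanically, here is a minimal sketch that reads two of the metrics over HTTP, assuming a reasonably recent SonarQube server exposing the /api/measures/component web API; the host, project key, and the idea of scripting the check are assumptions, not part of the original setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SonarGateCheck {
    public static void main(String[] args) throws Exception {
        // Query coverage and duplication for an assumed project key
        String url = "http://sonar.example:9000/api/measures/component"
                + "?component=my-project&metricKeys=coverage,duplicated_lines_density";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        // Parse the returned JSON and compare against the team gates:
        // coverage > 80%, duplicated_lines_density < 10%
        System.out.println(resp.body());
    }
}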

summary:

The relationship between testers and developers is never adversarial but assistive. To be precise, they are the two pans of the quality scale: if either side's work is not done well, the balance is lost.

4 Release stage testing

In the release phase, the main tasks of developers, testers, and QA personnel are shown in the following table:

| Stage | Developers | Testers | QA staff |
| --- | --- | --- | --- |
| Release stage | · Online application · Online deployment · Service monitoring | · Testing report · Online function check | · Manage review activities · Manage documentation artifacts |

Key practices as a tester are as follows:

testing report

Complete the acceptance test, provide a test report, and give test data metrics, such as:

  • Total number of defects found in testing: the number of defects raised during testing, excluding those whose status is "invalid" or "won't fix".
  • Number of serious defects found in testing: the number of defects of severity "Major" or "Critical" raised during testing, excluding those whose status is "invalid" or "won't fix".
  • Number of defects fixed: the number of defects raised during testing whose status is "closed".
  • Number of unresolved defects: the number of defects remaining after excluding those whose status is "invalid", "won't fix", or "closed".
  • Defect repair rate: (number of defects fixed) ÷ (total number of defects found) × 100%
  • Serious defect rate: (number of serious defects found) ÷ (total number of defects found) × 100%
  • Serious defect repair rate: (number of serious defects fixed) ÷ (number of serious defects found) × 100%
  • Test requirement coverage: (number of requirements tested) ÷ (total number of requirements) × 100%
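These formulas are simple enough to compute mechanically. A small sketch follows, with illustrative field names, using the figures from the defect-status table later in this section:

public class DefectMetrics {
    int totalFound;       // defects raised in testing, excluding invalid / won't-fix
    int seriousFound;     // Major + Critical among totalFound
    int fixed;            // defects with status "closed"
    int seriousFixed;     // Major + Critical among fixed

    double repairRate()        { return 100.0 * fixed / totalFound; }
    double seriousRate()       { return 100.0 * seriousFound / totalFound; }
    double seriousRepairRate() { return 100.0 * seriousFixed / seriousFound; }

    public static void main(String[] args) {
        DefectMetrics m = new DefectMetrics();
        // Figures from the defect-status table below: 42 found, 40 closed, 5 serious, 5 serious closed
        m.totalFound = 42; m.fixed = 40; m.seriousFound = 5; m.seriousFixed = 5;
        System.out.printf("repair rate %.0f%%, serious rate %.0f%%, serious repair rate %.0f%%%n",
                m.repairRate(), m.seriousRate(), m.seriousRepairRate());
        // prints: repair rate 95%, serious rate 12%, serious repair rate 100%
    }
}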

Defect Statistical Analysis Report

In addition, testers have another important job: statistical analysis of the current version's defects.

Statistics by defect level:

| Module | Critical | Major | Medium | Minor | Total |
| --- | --- | --- | --- | --- | --- |
| Front page | 0 | 0 | 1 | 0 | 1 |
| Module one | 0 | 0 | 0 | 2 | 2 |
| Module two | 0 | 1 | 2 | 10 | 13 |
| Module three | 0 | 0 | 1 | 4 | 5 |
| Module four | 0 | 0 | 1 | 2 | 3 |
| Module five | 0 | 0 | 3 | 2 | 5 |
| Module six | 0 | 1 | 0 | 1 | 2 |
| Module seven | 0 | 2 | 0 | 6 | 8 |
| Sonar | 0 | 1 | 2 | 0 | 3 |
| Total | 0 | 5 | 10 | 27 | 42 |

Figure-11- Defect Statistics

Statistics by defect source:

| Severity | Development 1 | Development 2 | Development 3 | Development 4 | Development 5 | Legacy |
| --- | --- | --- | --- | --- | --- | --- |
| Critical | 0 | 0 | 0 | 0 | 0 | 0 |
| Major | 1 | 2 | 0 | 0 | 0 | 2 |
| Medium | 1 | 7 | 0 | 1 | 0 | 1 |
| Minor | 1 | 7 | 4 | 6 | 3 | 6 |
| Total | 3 | 16 | 4 | 7 | 3 | 9 |

Statistics by defect status:

| Total defects | Defects closed | Legacy | Defect repair rate | Serious defects | Serious defect rate | Serious defects closed | Serious defect repair rate |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 42 | 40 | 2 | 95% | 5 | 12% | 5 | 100% |

Test progress and problem analysis:

1. From the distribution of bug severity, bugs at Major level or above account for 12%, a low proportion, indicating that most of the main functions have been realized;

2. The defects at the Sonar level are concentrated in code conventions and unit-test coverage, indicating that code quality needs improvement;

3. Testing was relatively thorough early in the version; later, as development finished and submitted more function points and the bug count rose, the remaining testing time became tight;

4. During the version test, the code in the test environment was overwritten once, and test execution was affected twice by developers' operational mistakes.

summary:

Testers should keep feeding back, improving, and summarizing the problems that occur in each version (whether defects or process issues), analyze the defects, and distill patterns that help developers build good habits and improve code quality.

5 Tests in the daily operation stage

In the daily operation stage, the main tasks of developers, testers, and QA personnel are shown in the following table:

| Stage | Developers | Testers | QA staff |
| --- | --- | --- | --- |
| Daily operation | · Production fault registration | · Version problem feedback and improvement suggestions · Production failure analysis | · Manage day-to-day operations |

The daily operation stage is not a terminal stage. Even if activities in the requirements, development, and release stages pause, as long as the product provides services, daily operation continues.

Key practices as a tester are as follows:

Version problem feedback and improvement suggestions

Summarize and feed back problems that occur in daily operation, put forward improvement suggestions, and track their implementation.

Production failure analysis

Assist development in troubleshooting production failures, so that missed test scenarios can be identified and filled in.

Human resources

Software testing is not the last line of defense for product quality, and neither are testers; in principle, a tester's work could be done by more senior developers. Reality, however, is harsh: the tester-to-developer ratio is currently about 1:3 in mature teams, and in teams still improving, resource shortages can push it to 1:7. For quite a long time developers will not be able to replace testers completely, for one key reason: the mindset is different. As the old saying goes, it is easier to change rivers and mountains than a person's nature. When a developer's way of thinking changes, he has become a tester. It is better, then, to keep testers independent while also cultivating a degree of testing literacy in developers; both help to guarantee product quality.

Full-process software testing practice emphasizes testing activities that run through every stage. Development and testing alike must understand the value of each other's activities: what should be done when, and to what standard. Only by guaranteeing quality at every link can the product's quality be guaranteed throughout. Moreover, product quality is not tested into a product; it settles out of the construction process. The competence of developers, the competence of testers, and how seriously the team takes the development and testing process determine product quality. Product quality is like a cake: it should be cut into small pieces and placed in everyone's hands, so that everyone tastes the sweetness and shoulders the responsibility.

TQM (Total Quality Management) in Software

This is an extension and a related topic; the process is as follows:

TQM takes product quality as its core and establishes a scientific, rigorous, and efficient quality system, covering the full range of activities needed to provide products that satisfy user needs.

In the software industry, the main reason software quality fails to improve is a lack of quality awareness, and applying the ideas of total quality management to software is an effective means of improving product quality and gaining competitive advantage. CMM is not only a good tool for guiding process improvement; it also applies the concept of total quality management to software, realizing total quality management of the software process from requirements management through project planning, project control, software acquisition, quality assurance, and configuration management. The CMM philosophy starts from customer needs and implements process quality management across the whole organization, which exactly matches the basic principles of TQM. Its significance therefore goes beyond controlling the software development process: crucially, it is also an efficient management method that helps an enterprise minimize cost while improving quality and user satisfaction.

Software quality management embodies the operating mechanism of TQM. Software quality management is an independent KPA at CMM level 4; its purpose is to make the project's software quality management activities planned, and the software product's quality goals quantified and managed. It follows the scientific procedure of total quality management, PDCA (Plan, Do, Check, Act), i.e., four phases:

(1) Plan: determine the quality goals and the measures needed to achieve them. Making the quality plan is the foundation of the whole quality management activity. The national standard defines quality as: the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.

For software, quality is embodied in quality characteristics. ISO/IEC 9126 specifies six quality characteristics, namely functionality, reliability, usability, efficiency, maintainability, and portability, each containing several sub-characteristics. Setting quality goals means finding the correlation between users' quality needs and these characteristics, and translating them into measurable technical or capability indicators for the development process, to serve as the basis for quality control.

The six characteristics above are external attributes of software, directly related to user satisfaction. A quality model can be built according to the organization's goals and the project's characteristics, and methods such as QFD (Quality Function Deployment) and GQM (Goal Question Metric) can be used to set quantified quality goals, though in practice these are often quite complex and hard to obtain. A more common approach is therefore to reflect product quality goals through process capability goals. A typical capability indicator is defect density (the number of defects per unit size of a work product) together with the corresponding phase defect-removal rate: the product's size and target defect density can be estimated from historical data, so that the number of defects found in each phase can be controlled.
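To make the capability indicator concrete, here is a minimal numeric sketch; the size, target density, and defect count are illustrative assumptions, not figures from this article:

public class DefectDensitySketch {
    public static void main(String[] args) {
        double sizeKloc = 12.5;      // assumed product size, in thousands of lines of code
        double targetDensity = 4.0;  // assumed target defect density, defects per KLOC, from historical data
        double expected = targetDensity * sizeKloc;  // defects testing is expected to find in total
        int foundSoFar = 42;         // assumed defects actually found so far
        System.out.printf("expected about %.0f defects; found %d (%.0f%% of expectation)%n",
                expected, foundSoFar, 100.0 * foundSoFar / expected);
    }
}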

(2) Do: implement according to the predetermined plan, target measures, and division of labor. To control software quality during the process, corresponding measures must be taken to measure the quality of software work products at predetermined checkpoints or milestones. Commonly used methods include peer review, prototype evaluation, and testing. These methods measure software quality from two angles: internal attributes, i.e., attributes measurable from the process and activities themselves, such as the defect density of work products; and external attributes, i.e., attributes related to the user environment, which are often hard to measure during the process and can only be evaluated by introducing user testing early in the project. Letting users participate in the development process greatly helps improve product quality.

(3) Check: compare the implementation results against the plan, examine how the plan was carried out and what effect it had, whether the expected goals were reached, and find out why. When analyzing quality measurement results, statistical tools and methods are often used, such as check sheets, histograms, control charts, Pareto charts, scatter diagrams, cause-and-effect diagrams, and run charts. These tools help identify problems, assess the status quo, discover causes, and even shape the next steps.

(4) Act: summarize the experience and lessons, and use the unresolved problems as input to the next cycle's plan. CMM requires that, after analyzing the software quality measurements, one should "take appropriate measures consistent with the software quality plan, so that the product's measured quality stays consistent with the software quality objectives".


I hope this is helpful to your company's IT software development and quality management.

Come on, testers! If your plans need improving, act on them: it is better to be on the road than to stand watching at the start. Your future self will surely thank the hard-working you of today!
