Test coverage in the era of continuous delivery (translation)

Test coverage is a rare strategy that helps us spend testing time at the right priority. When a feature was last tested, how much of it our automation covers, how often customers use it, and how critical it is to the application are all key factors to consider. Here are some ideas for maintaining high quality as you turn to continuous delivery.

In the bad old days, we had a testing phase that lasted several weeks or months. We would start testing and find problems, and eventually we had a build fixed up enough to be considered the release.

Testers gathered around the release candidate, and we never had enough time to take every one of our test ideas into every corner of the software. Even when we did, we wanted a balanced test of all the features, whether organized by user-centric use cases, by components, or by requirements. That, for a release, is test coverage.

The idea of coverage was born.

Twenty years on, many of the teams I work with no longer have a "testing phase." When they do, it is half a day or a day, perhaps as much as a week. Some large companies still emphasize integrated testing across multiple teams, but they tend to call it a transitional activity rather than an end state.

Testing is also far more complex than we were used to. We now have unit test coverage, integration test coverage, automated test coverage, and, yes, coverage by real people exploring and investigating.

Layered on top of those priorities is a third dimension: time. Many of the software organizations I work with release at least daily, if not continuously. Testing a release candidate for a week barely works anymore, because people are busy committing fixes, often to the main branch, the same place the release candidate is cut from. With staged continuous deployment, we test changes on the appropriate development server in real time.

With continuous delivery, every single fix rolls out to production on its own; there is no "wait and test everything" period.

What "deployment" means has changed

When Windows programs reigned, software actually shipped, on a disc, in a box. We would collect the current version of all the files and deploy them as one bundle. The web changed all that: suddenly we could push a single web page, and perhaps a few images, straight to production. If the page is isolated, the only risk is that that one page breaks, and we do not need to redeploy the entire application.

Some of the most famous early adopters of continuous delivery really were pushing static PHP files, individually or in small batches. As long as the code did not change a shared library or the database, programmers could roll back an error easily, and suddenly the long, involved process of regression testing was no longer required.

Microservices give us a similar benefit. As long as a service is isolated from the main application and we know where the service is called, we can test the service in staging, test where it interacts with the user interface, and roll it out, without a large-scale regression of the entire application.
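As a minimal sketch of that idea, the snippet below (Python, with invented endpoint paths and a fake fetcher standing in for real HTTP) checks one service's own contract in staging, so only that service needs to pass before rolling out:

```python
import json

def smoke_test_service(fetch, checks):
    """Run lightweight contract checks against a single staged service.

    `fetch` is any callable taking a path and returning a JSON body as
    a string; `checks` maps paths to predicates over the parsed JSON.
    Returns the paths that failed, so an empty list means this one
    service looks safe to roll out on its own.
    """
    failures = []
    for path, predicate in checks.items():
        try:
            if not predicate(json.loads(fetch(path))):
                failures.append(path)
        except Exception:  # bad JSON, missing path, etc.
            failures.append(path)
    return failures

# A fake staging environment; a real fetcher would wrap an HTTP GET.
fake_staging = {
    "/health": '{"status": "ok"}',
    "/v1/price?sku=42": '{"sku": 42, "price": 19.99}',
}

failures = smoke_test_service(
    fake_staging.get,
    {
        "/health": lambda body: body["status"] == "ok",
        "/v1/price?sku=42": lambda body: body["price"] > 0,
    },
)
print(failures)  # -> [] : nothing failed, roll it out
```

The point is the scope: the checks cover the service and the places it touches the user interface, not a regression of the whole application.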

The transition to continuous delivery

Many of the teams I have worked with are trying to move to microservices, but things are not that simple. They have not put the technology in place for push-button deploys, and when they have, they often cannot easily roll back.

Rollback often changes character and becomes rolling a component forward. That takes considerable isolation of changes and supporting structure, so the roll-forward does not pick up other, later commits. I worked with one company that had this problem; the testers simply reviewed every change since the last push.

Do not do that.

Meanwhile, the idea of coverage gets lost. We pretend to live in a perfect world of independent services, but the failure demand remains high. A fix to one feature or component is likely to break some other feature. Until those "change ripples" are eliminated, continuous delivery is only a means of rapidly rolling out a pile of broken code.

Bottom line: a team needs a game plan for how testing will change when it turns to continuous delivery.

Tracking features and risk

Today, when I ask to see a team's list of test ideas, what I get is all of their automation, which they run dutifully before every deploy: all the Selenium tests, all the unit tests, and so on.

The problem is that any test idea that is too expensive or impossible to automate (switching locales, printing, resizing the browser) gets ignored. Those things may be tested once, for one story, and then forgotten. And, of course, it is the forgotten things that end up biting us.

To fight that forgetting, a team can keep a low-tech test board of product features, assigning each feature a score from 1 to 5 (or from frowny face to smiley face) for how well tested it is. Before the next release, when deciding where to point testing effort, look at the board for what this release touched, what is critical to the product, and what simply is not well covered.
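The board is just data, so it is easy to sketch in code. The feature names, scores, and weights below are all invented for illustration; the only thing taken from the text is the idea that a low test score, being touched by this release, and being critical to the product all raise a feature's claim on testing attention:

```python
features = [
    # (name, tested_score 1-5, touched_this_release, critical)
    ("checkout",     2, True,  True),
    ("search",       4, True,  False),
    ("user-profile", 5, False, False),
    ("reporting",    1, False, True),
]

def attention_rank(feature):
    """Higher rank = test this feature sooner. Weights are arbitrary."""
    name, tested, touched, critical = feature
    return (5 - tested) + (2 if touched else 0) + (2 if critical else 0)

for f in sorted(features, key=attention_rank, reverse=True):
    print(f[0], attention_rank(f))
# "checkout" prints first (rank 7): touched, critical, and poorly tested.
```

A wall of sticky notes does the same job without the code; the script only shows that the prioritization is mechanical once the scores exist.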

You can also write risks on sticky notes and put them on a wall, sorted by priority. Anyone can add anything they like to the wall, with the most serious problems at the top. Every day, each member of the team pulls at least one of these risks off the wall, tests for it, and moves the note to the other side of the wall along with the date. After enough time passes, those tested cards age back onto the top of the stack. This strategy can even work under continuous delivery; the cards just move up the stack faster. You might even do some of that testing in production!
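One way to model that aging wall is a priority heap in which both severity and staleness raise a card's urgency, so a minor risk that nobody has tested in weeks eventually climbs back to the top. Everything concrete here (the card names, the weight of 10 per severity point) is made up for illustration:

```python
import heapq
from datetime import date, timedelta

# Each "card" on the wall: (name, severity 1-5, last_tested date).
def urgency(card, today=None):
    """Mix severity with staleness so forgotten risks resurface."""
    name, severity, last_tested = card
    today = today or date.today()
    return severity * 10 + (today - last_tested).days

cards = [
    ("payment rounding", 5, date.today() - timedelta(days=3)),
    ("browser resize",   2, date.today() - timedelta(days=40)),
    ("locale switch",    3, date.today() - timedelta(days=1)),
]

# heapq is a min-heap, so push negated urgency to pop the most
# urgent card first.
wall = [(-urgency(c), c) for c in cards]
heapq.heapify(wall)

_, next_card = heapq.heappop(wall)  # today's card to pull off the wall
print(next_card[0])  # -> browser resize
```

Note that the low-severity "browser resize" card wins because it has gone untested for 40 days, which is exactly the forgotten-things-bite-us dynamic the wall is meant to prevent.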

Pay attention to priorities

Test coverage, as I said at the start, is a rare strategy that helps us spend testing time at the right priority. When a feature was last tested, how much automated coverage we have, how often customers use the feature, and how critical the feature is to the application are all factors to consider.

Rather than hand you pseudoscientific rules for how well tested everything is, I have tried to offer a couple of ideas and visual methods that create a shared understanding of coverage and a way to catch problems.

Some teams, with small, independently deployed components, will be able to ignore the classic coverage problem. For everyone else: we had better get to work.


Source: www.cnblogs.com/fengye151/p/11519214.html