Software Testing - Fundamentals & Black/White/Static/Dynamic

Chapter 1 Testing background

Terms

Fault/failure/defect – severe, dangerous.

Anomaly/incident/variance – unintended, not so negative.

Problem/error/bug – generic.

Product specification: a.k.a. spec/product spec. An agreement among the software development team, it defines the product they are creating, detailing what it will be, how it will act, what it will do and won’t do.

5 rules to identify a software bug

Software bug: all software problems can be called bugs. A software bug occurs when –

1) the software doesn’t do something that the spec says it should do;

2) the software does something that the spec says it shouldn’t do;

3) the software does something that the spec doesn’t mention;

4) the software doesn’t do something that the spec doesn’t mention but should;

5) the software is difficult to understand/hard to use/viewed as just plain not right.

 

The cost of bugs is logarithmic: it increases tenfold as time passes through the development phases.

Beta test: The development team sends out a preliminary version of the software to a small group of customers (chosen to represent the larger market).

 

The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.

Chapter 2 The software development process

Deliverable: A software product component that’s created and passed on to someone else, can be categorized into –

·customer requirements

·specifications

·schedules

·software design documents(might include: architecture, data flow diagram, state transition diagram, flowchart, commented code)

·test documents(might include: test plan, test cases, bug reports, test tools and automation, metrics, statistics and summaries)

A software product is made up of –

1) the code

2) supporting parts, e.g. help files, user’s manual, sample and examples, labels and stickers, product support info, icon and art, error messages, ads and marketing material, setup and installation, readme file.

Software project staff

Project managers/program managers/producers: drive the project from beginning to end, responsible for writing the spec, managing the schedule, and making the critical decisions and trade-offs.

Architects/system engineers: technical experts, design the overall systems architecture. Work closely with the programmers.

Programmers/developers/coders: design and write software and fix the bugs. Work closely with the project managers and testers.

Testers/QA (Quality Assurance Staff): find and report problems. Work closely with all members.

Technical writers/user assistance/user education/manual writers/illustrators: create the paper/online documentation for the software product.

Configuration management/builder: put all the software and documentation together into a single package.

Software development lifecycle models

Software development lifecycle model: The process used to create a software product from its initial conception to its public release.

Big-bang model – Simplest.

Code-and-fix model – a rough idea -> simple design -> long repeating cycle of coding, testing and fixing -> at some point decide that enough and release.

Waterfall model – moves down a series of steps. At the end of each step, a review is held to determine if ready to move to the next step. 

Features of waterfall model: 1) large emphasis on specifying what the product will be; 2) discrete steps, no overlap; 3) no way to back up.

Pros: everything is thoroughly specified, the test group can create an accurate plan.

Cons: problems will not be detected until testing at the end.

Spiral model – start small with important features -> try and get feedback -> move on to the next level -> repeat till the final product. Testers are involved early in the development process and have the opportunity to find problems early.

6 steps of spiral model: 1) Determine objectives, alternatives, and constraints; 2) Identify and resolve risks; 3) Evaluate alternatives; 4) Develop and test the current level; 5) Plan the next level; 6) Decide on the approach for the next level.

 

Chapter 3 the realities of software testing

Testing Axioms

1. It’s impossible to test a program completely.

2. Software testing is a risk-based exercise.

3. Testing can’t show that bugs don’t exist.

4. The more bugs you find, the more bugs there are.

5. The pesticide paradox – the more you test software, the more immune it becomes to your tests.

6. Not all the bugs you find will be fixed.

7. It can be difficult to say when a bug is a bug.

Latent bugs: Bugs that are undiscovered or haven’t yet been observed.

8. Product specifications are never final.

9. Software testers aren’t the most popular members of a project team. 

10. Software testing is a disciplined technical profession.

 Software testing terms

Precision & Accuracy 

Precise: closely grouped.

Accurate: close to the target.

Verification & Validation

Verification: the process of confirming that the software meets its specification.

Validation: the process of confirming that the software meets the user’s requirements.

Quality & Reliability

Quality: A degree of excellence, to meet the customer’s needs.

Reliability: just one aspect of quality.

Testing & Quality Assurance (QA)

tester: aims to find bugs, find them as early as possible, and make sure they get fixed.

QA person: aims to create and enforce standards and methods to improve the development process and to prevent bugs from ever occurring.

 

Chapter 4 static black-box testing 

Black-box & White-box testing

Black-Box testing: a.k.a. functional testing/behavioral testing. The tester only knows what the software is supposed to do, can’t look in the box to see how it operates.

White-box testing: a.k.a. clear-box testing. The tester has access to the program’s code and can examine it for clues to help with testing.

Static & Dynamic testing

Static testing: to test something that’s not running – examine and review it.

Dynamic testing: to run and use the software. 

Static black-box testing

1) Perform a high-level review of the specification, which includes –

 ·Pretend to be the customer

·Research existing standards and guidelines (test that the correct standards/guidelines are being used)

·Review and test similar software 

2) Test the specification at a low level, which includes –

·Specification attributes checklist 

complete, accurate, precise/unambiguous/clear, consistent, relevant, feasible, code-free, testable.

 ·Specification terminology checklist

always/every/all/none/never, certainly/therefore/clearly/obviously/evidently, some/sometimes/often/usually/ordinarily/customarily/most/mostly, etc./and so forth/and so on/such as, good/fast/cheap/efficient/small/stable, handled/processed/rejected/skipped/eliminated, if…then…(but missing else).

 

Chapter 5 Dynamic black-box testing

Dynamic black-box testing: a.k.a. behavioral testing. Testing software without having an insight into the details of underlying code. 

Test cases: The specific inputs you’ll try and the procedures you’ll follow when testing the software.

 Exploratory testing: To simultaneously learn the software, design tests, and execute those tests. A solution (to perform dynamic black-box testing) when you don’t have a spec. 

Test-to-pass & Test-to-fail – 2 fundamental approaches to testing software

Test-to-pass: assure that the software minimally works, don’t push its capabilities.

Test-to-fail: a.k.a. error-forcing. Design and run test cases with purpose of breaking the software.

When designing and running test cases, always run the test-to-pass cases first.

Test cases that attempt to force error messages straddle the line between test-to-pass and test-to-fail.

Equivalence partitioning

Equivalence partitioning: a.k.a. equivalence classing. The process of methodically reducing the huge (infinite) set of possible test cases into a smaller, manageable set that still tests the software just as effectively.

 Equivalence class/equivalence partition: A set of test cases that tests the same thing or reveals the same bug.

 Equivalence partitioning can be subjective, 2 testers may arrive at 2 different sets of partitions.
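
As a minimal sketch of equivalence partitioning, consider a hypothetical age field whose spec (assumed here for illustration) accepts integers 0 through 120. One representative value per partition is enough to cover each class:

```python
# Hypothetical validator: the spec is assumed to accept integers 0-120.
def validate_age(value):
    """Return True if value is an acceptable age, else False."""
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return 0 <= value <= 120

# One representative test case per equivalence class:
partitions = {
    "valid":       30,         # any value in [0, 120]
    "below range": -1,         # any negative integer
    "above range": 121,        # any integer > 120
    "wrong type":  "thirty",   # any non-integer input
}

for name, sample in partitions.items():
    expected = (name == "valid")
    assert validate_age(sample) == expected, (name, sample)
```

Another tester might partition the same input differently (e.g. splitting "wrong type" into strings, floats, and booleans), illustrating the subjectivity noted above.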

A common approach to software testing is to divide the test work into 1) testing the data and 2) testing the program flow (states).

 Data testing

 Test cases can be reduced by equivalence partitioning based on boundary conditions, sub-boundary conditions, nulls, and bad data. 

 Boundary conditions: situations at the edge of the planned operational limits of the software.

 When presented with a boundary condition, always test 1) the valid data just inside the boundary, 2) the last possible valid data, and 3) the invalid data just outside the boundary.
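
The three boundary checks above can be sketched for a hypothetical field that accepts integers 1 through 100 inclusive:

```python
# Hypothetical range check: the spec is assumed to allow 1..100.
def in_range(n):
    return 1 <= n <= 100

# At each boundary test: valid data just inside, the last possible
# valid data (the boundary itself), and invalid data just outside.
lower_boundary = [(2, True), (1, True), (0, False)]
upper_boundary = [(99, True), (100, True), (101, False)]

for value, expected in lower_boundary + upper_boundary:
    assert in_range(value) == expected, value
```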

 Sub-boundary conditions: a.k.a. internal boundary conditions. The boundaries that are internal to the software, not necessarily apparent to an end user. e.g. Powers-of-two, the ASCII table.

A separate equivalence partition should be created for default, empty, blank, null, zero and none values.

Invalid, wrong, incorrect and garbage data is used for test-to-fail.

 

State testing

Software state: A condition or mode that the software is currently in.

A state transition map should show 1) each unique state; 2) the input or condition that takes the software from one state to the next; 3) the conditions that are set and the output produced when a state is entered/exited.

5 ways to reduce the number of states and transitions:

1. Visit each state at least once.

2. Test the state-to-state transitions that look like the most common or popular.

3. Test the least common paths between states.

4. Test all the error states and returning from the error states.

5. Test random state transitions.

Testing states and state transitions involves checking all the state variables.
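
A state transition map and its tests can be sketched with a hypothetical stopwatch modeled as a transition table; the test cases visit every state and exercise every arc, including an invalid input:

```python
# Hypothetical stopwatch: states and the inputs that move between them.
TRANSITIONS = {
    ("stopped", "start"): "running",
    ("running", "stop"):  "stopped",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("paused",  "stop"):  "stopped",
}

def step(state, event):
    """Return the next state; inputs invalid in a state are ignored."""
    return TRANSITIONS.get((state, event), state)

# Visit each state at least once and test every transition arc.
assert step("stopped", "start") == "running"
assert step("running", "pause") == "paused"
assert step("paused", "start") == "running"
assert step("running", "stop") == "stopped"
assert step("paused", "stop") == "stopped"
# An input that is invalid in the current state leaves it unchanged.
assert step("stopped", "pause") == "stopped"
```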

  

Examples of testing states to fail: race conditions, repetition, stress, load.

Race conditions / Bad Timing

Race condition: multiple processes racing to a finish line can confuse software that didn’t expect to be interrupted.

1) Look at each state in the state transition map and think about what outside influence might interrupt that state.

2) Consider what the state might do if the data it uses isn’t ready or is changing when it’s needed.

3) What if 2 or more of the connecting arcs or lines occur at the same time?
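
The classic shared-state race can be sketched in Python with a counter updated by several threads: the read-modify-write in `increment` can be interrupted mid-update, and the lock is what makes it safe. (The `Counter` class is a hypothetical example, not from the text.)

```python
import threading

# Hypothetical shared counter. Without the lock, "self.value += 1"
# is a read-modify-write that another thread can interrupt, losing
# increments; the lock makes the update atomic.
class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:   # remove this line to reintroduce the race
            self.value += 1

counter = Counter()
threads = [
    threading.Thread(
        target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock in place, no increments are lost.
assert counter.value == 8 * 10_000
```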

Repetition, stress, and load

Repetition testing – doing the same operation over and over (mainly to look for memory leaks).

Stress testing – running the software under less-than-ideal conditions (e.g. low memory, low disk space, slow CPUs, slow modems).

Load testing – operating the software with the largest possible data files, or running over long periods.
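
A repetition test hunting for memory leaks can be sketched with Python's `tracemalloc`: run the same operation many times and check that traced memory stays near the baseline. (`build_report` is a hypothetical stand-in for the operation under test, and the 512 KB threshold is an arbitrary choice.)

```python
import tracemalloc

# Hypothetical operation under repetition test.
def build_report(n):
    return sum(i * i for i in range(n))

tracemalloc.start()
build_report(1000)                         # warm-up run
baseline, _ = tracemalloc.get_traced_memory()

for _ in range(5_000):                     # repeat the same operation
    build_report(1000)

current, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# In steady state, memory should not keep growing run after run.
leaked = current - baseline
assert leaked < 512 * 1024, f"possible memory leak: {leaked} bytes"
```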

 

Other black-box test techniques

1. Behave like a dumb user.

2. Look for bugs where you’ve already found them.

3. Think like a hacker.

4. Follow experience, intuition, and hunches.

 

Chapter 6 static white-box testing

Static white-box testing: a.k.a. structural analysis. The process of carefully and methodically reviewing the software design, architecture, or code for bugs without executing it. 

Pros: 

 1) find bugs early in the development cycle; 

 2) testers can gain information about how the software works and what potential weaknesses and risky areas exist;

 3) can find missing items as well as problems;

 4) can build a better working relationship with the programmers;

 5) Project status can be communicated to all team members who participate in the testing.

 Formal review: The process under which static white-box testing is performed, includes 4 essential elements – 1) Identify problems; 2) Follow rules; 3) Prepare; 4) Write a report.

Peer reviews: a.k.a. buddy reviews. The least formal method of doing formal reviews: a small group simply reviews the code together and looks for problems and oversights. (make sure that the 4 key elements are in place)

Walkthroughs: the next step up in formality from peer reviews. 1) The reviewers (a group of other programmers and testers) receive copies of the software in advance; 2) The presenter (the programmer who wrote the code) formally presents it, reading through the code and explaining what it does and why; 3) After the review, the presenter writes a report.

Inspections: the most formal type of review. Difference from peer reviews/walkthroughs – the presenter/reader isn’t the original programmer, which obliges another person to fully understand the software. The inspectors (other participants) are tasked with reviewing the code from different perspectives. Some inspectors are also assigned roles as moderator and recorder to assure the rules are followed and that the review is run effectively.

Coding standard & guidelines

Standards: established, fixed, have-to-follow-them rules – the do and don’ts.

Guidelines: The suggested best practices, the recommendations, the preferred way of doing things.

Generic code review checklist – in addition to comparing the code against a standard/guideline

·Data reference errors (the primary cause of buffer overruns)

 ·Data declaration errors

 ·Computation errors

 ·Comparison errors

 ·Control flow errors

 ·Subroutine parameter errors

 ·Input/output errors

 Other checks

 ·Language other than English; extended ASCII characters; Unicode

 ·Portability

 ·Compatibility

 ·“warning” or “informational” messages

 

 Chapter 7 Dynamic white-box testing 

Dynamic white-box testing: a.k.a. structural testing. Using information gained from seeing what the code does and how it works to determine what to test, what not to test, and how to approach the testing.

Dynamic white-box testing & debugging

 Difference – The goal of dynamic white-box testing is to find bugs; The goal of debugging is to fix them. (overlap in the area of isolating where and why the bug occurs) 

Individual pieces of code are built up and tested separately, and then integrated and tested again.

 Unit testing/module testing: Testing that occurs at the lowest level.

 Integration testing: Testing performed against groups of modules.

 System testing: Testing on the entire product – or at least a major portion of it.

 There are 2 approaches to incremental testing – bottom-up and top-down.

 bottom-up testing

 test driver: A module written to efficiently test a low-level module.

 Test drivers send test-case data to the modules under test -> read back the results -> verify if they’re correct. 
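
That send/read/verify loop can be sketched with a hypothetical low-level module `parse_price` and a driver that feeds it test-case data and checks the results:

```python
# Hypothetical low-level module under test: convert a price string
# such as "$1,234.50" into an integer number of cents.
def parse_price(text):
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

def run_driver(cases):
    """Test driver: send each input, read back the result, verify it."""
    failures = []
    for text, expected in cases:
        actual = parse_price(text)
        if actual != expected:
            failures.append((text, expected, actual))
    return failures

cases = [("$1,234.50", 123450), ("$0.99", 99), ("5", 500)]
assert run_driver(cases) == []   # empty list means every case passed
```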

Top-down testing

Stub: a small piece of code that stands in for a low-level module and sends test data up to the high-level module being tested.
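
A stub can be sketched as a canned replacement for a low-level module that hasn't been built yet, letting the high-level module be tested top-down. (`greet_user` and the lookup functions are hypothetical examples.)

```python
# Real low-level module: not built yet, so it can't be called.
def fetch_username(user_id):
    raise NotImplementedError("database layer not built yet")

# High-level module under test; the dependency is injectable.
def greet_user(user_id, lookup=fetch_username):
    return f"Hello, {lookup(user_id)}!"

# Stub: feeds known test data up to the high-level module.
def stub_fetch_username(user_id):
    return {1: "ada", 2: "grace"}[user_id]

assert greet_user(1, lookup=stub_fetch_username) == "Hello, ada!"
assert greet_user(2, lookup=stub_fetch_username) == "Hello, grace!"
```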

It is important to create black-box testing cases based on the specification before white-box cases, or you will be biased into creating test cases based on how the module works.

 

Divide the code into data and its states (program flow). 

Data coverage

 1) Data flow coverage: involves tracking a piece of data completely through the software.  

At the unit test level this would just be through an individual module. Testing a function at this low level can be done with a debugger and watch variables.

2) Examine the code carefully to look for sub-boundary conditions and create test cases that exercise them.

3) Scour the code for formulas and equations, and create test cases and equivalence partitions for the variables they use.

4) Error forcing: Use the debugger to force some variables to specific values to cause error message.

Make sure you aren’t creating a situation that can never happen in the real world.

Code coverage

Code coverage testing: Attempt to enter and exit every module, execute every line of code, and follow every logic and decision path through the software.

Code coverage analyzer: A tool that can offer statistics that identify which portion of the software were executed and which portion weren’t. 

Statement coverage/line coverage: A form of code coverage that can tell if every statement is executed.

Path testing: Attempting to cover all the paths in the software. The simplest form of path testing is branch coverage testing.

Condition coverage testing takes the extra conditions on the branch statements into account.
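
The difference between branch and condition coverage can be sketched with a hypothetical function containing one compound decision:

```python
# Hypothetical decision with one branch but two conditions.
def can_ship(weight, express):
    if weight <= 50 or express:
        return True
    return False

# Branch coverage needs only two cases: one taking each branch.
branch_cases = [(10, False), (60, False)]

# Condition coverage also requires each sub-condition to be
# evaluated both True and False:
condition_cases = [
    (10, False),   # weight <= 50 is True,  express is False
    (60, True),    # weight <= 50 is False, express is True
    (60, False),   # both conditions False (the untaken branch)
]

assert [can_ship(w, e) for w, e in branch_cases] == [True, False]
assert [can_ship(w, e) for w, e in condition_cases] == [True, True, False]
```

Note that the branch-coverage cases never evaluate `express` as True, so a bug in that condition would go undetected; condition coverage closes that gap.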

Reposted from www.cnblogs.com/RDaneelOlivaw/p/9695454.html