What to Look at When Appraising Testers' Performance

  On a project, the assessment of testers often becomes a headache for the project manager and the test manager: how should testers' work be evaluated, and how can differences in testing quality be measured? Drawing on data collected over many years of testing work, on the limited material available online, and on analysis of different projects, I have summed up my thinking into a practical method and offer it here for everyone's reference.

       How to assess testers' work has long been a controversial topic. One idealized method is to collect the defects from every phase of the project and, after system testing ends, use them to judge the quality of the test phase. In practice this approach is unworkable: first, defects from the maintenance and implementation phases are hard to collect; second, defects span the whole product life cycle, can never be enumerated exhaustively, and are difficult to compare across separated time periods; third, the cost is too high and the time span too long for the assessment to provide timely incentive. Can we find a way to evaluate testers' work during the project itself? Following this line of thought, I worked out an effective method.

   A few caveats up front. First, this assessment method was refined through practice on projects of more than 10,000 person-hours; for small projects the data may be insufficient and some indicators may not apply. Second, success with this assessment inside a project team does not mean a test department can adopt a similar scheme wholesale; it is offered only as one reference method, and a department running assessments should additionally weigh the importance and size of the project tasks assigned. Third, besides the quantitative indicators, a tester's working attitude, initiative, and willingness to learn new techniques must be captured through qualitative analysis.

  Assessment of testers on a project team covers two blocks: work efficiency and work quality. The efficiency indicators examine activities; the quality indicators examine the substance of the outputs. Since the assessment is based on the testing process, it must be carried out after that process ends. Of course, if the project delivers to test in monthly increments, the assessment can be run monthly as circumstances allow, with a uniform review after the project or task ends. Following the traditional test cycle, the testing process divides into three parts: test planning, test design, and test execution. The test plan belongs to the test manager and is discussed at the end. Testers mainly do test design and test execution; the test manager's own design and execution work can be assessed alongside the testers', or folded into the project-group assessment. The assessment indicators are as follows:

 

1. Test design

 

Efficiency-related indicators

Document yield  The index is the total test case document pages divided by the effective time spent writing those documents. It examines a tester's productivity in producing test case documentation.

  Formula: Σ test case document pages (pages) / Σ effective time spent writing test case documents (hours)

  Reference index: project summaries give an average of about 1.14 pages/hour; above this value is good, below it is poor.

Case yield  A supplement to document yield: since documents may contain redundancy, the number of test cases should also be examined. The index is the total number of test cases divided by the effective time spent writing the documents.

   Formula: Σ number of test cases (cases) / Σ effective time spent writing test case documents (hours)

   Reference index: average 4.21 cases/hour
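As a sketch, the two yield indexes above reduce to simple ratios over per-tester totals; the function name and sample figures below are illustrative, not from the source:

```python
def design_yields(doc_pages, case_count, writing_hours):
    """Return (document yield in pages/hour, case yield in cases/hour)."""
    if writing_hours <= 0:
        raise ValueError("effective writing time must be positive")
    return doc_pages / writing_hours, case_count / writing_hours

# Example: 40 pages and 150 cases written in 35 effective hours.
pages_per_hour, cases_per_hour = design_yields(40, 150, 35)
```

Computing both ratios from the same time base keeps the two indexes comparable when documents differ in how densely cases are packed onto a page.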

 

Work quality indicators

Requirement coverage  The total number of test cases divided by the number of function points they correspond to one-to-one, mainly to check whether any function points were missed in testing.

   Formula: Σ number of test cases (cases) / Σ function points (points)

   Reference index: 100%. If even 100% function-point coverage cannot be reached, then at the very least testing is insufficient. This data is quite hard to collect; with a requirements traceability matrix, or a test management tool that links cases to requirements one-to-one, it becomes much easier.

   Note: some functions are difficult to test, so uncovered requirements need thorough analysis: did the tester clearly miss them, or are they untestable? Such cases should go into a tracking table for follow-up. Also, one test case may cover several function points, and one function point may need several cases; unless function points or cases are counted with repetition, a strict one-to-one comparison is hard to make.
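With a traceability matrix in hand, the coverage ratio and the list of missed function points fall out directly. A minimal sketch, assuming the matrix is a mapping from test-case id to the function points it exercises (all names illustrative):

```python
def requirement_coverage(case_to_points, all_points):
    """Return (coverage ratio, set of uncovered function points)."""
    covered = set()
    for points in case_to_points.values():
        covered |= points
    uncovered = set(all_points) - covered
    return 1 - len(uncovered) / len(all_points), uncovered

matrix = {"TC01": {"login", "logout"}, "TC02": {"login"}}
ratio, missing = requirement_coverage(matrix, ["login", "logout", "export"])
# "export" has no case, so coverage is 2/3 and it goes to the tracking table
```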

Document quality  The number of defects found in peer review and formal review of the test documents, or that number divided by the document page count. This index examines the quality of the documents a tester writes.

   Formula: Σ defects found (peer review and formal review) (defects)

   Σ defects found (peer review and formal review) (defects) / Σ test case document pages (pages)

   Reference index: the number of defects found in review is not fixed, so no reference value is available. Where the absolute counts cannot be compared directly, use defects/page for horizontal comparison.

Document effectiveness  The number of defects found during system testing using a test document, divided by that document's page count. It examines how effectively the document guides the testing work.

   Formula: Σ defects found (system test) (defects) / Σ test case document pages (pages)

    Reference index: average 2.18 defects/page

    Note: if a tester creates new documents to assist testing, they should be included in this part.

Case effectiveness  The total defects found divided by the total number of test cases executed. A supplement to the previous index, it examines whether the test cases themselves are of high quality.

 Formula: Σ defects found (system test) (defects) / Σ number of test cases (cases)

 Reference index: average 0.59 defects/case; that is, roughly one defect for every two cases executed. Every project differs, so look at your own practice.
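Document effectiveness and case effectiveness share a numerator and differ only in the denominator, so they can be computed together; a sketch with illustrative figures:

```python
def effectiveness(defects_found, doc_pages, cases_executed):
    """Return (defects per document page, defects per executed case)."""
    return defects_found / doc_pages, defects_found / cases_executed

# Example: 87 system-test defects against a 40-page document of 150 cases,
# which lands near the 2.18/page and 0.59/case reference averages.
per_page, per_case = effectiveness(87, 40, 150)
```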

 

2. Test execution

 

Efficiency-related indicators

 Execution efficiency  The test case document pages divided by the effective system test execution time (excluding the time spent writing the documents). A supplementary index is the number of test cases executed divided by that time. Both examine how fast a tester executes tests per hour.

   Formula: Σ test case document pages (pages) / Σ effective system test execution time (hours)

   Σ test cases (cases) / Σ effective system test execution time (hours)

   Reference index: average 0.53 pages/hour and 1.95 cases/hour; that is, a tester executes about half a page, or roughly two test cases, per hour. Horizontal comparison readily shows which members are more efficient. Note: high execution efficiency does not mean high quality; efficiency and quality can even be inversely related, and the work quality indicators below complement this section. Experience shows that highly efficient members often have a lower defect discovery rate. Even if this index is not included in the assessment, it is worth collecting as important data for improving testing.

 

Schedule deviation  The difference between planned and actual times, that is, the start-time differences plus the end-time differences, divided by the total working hours. It examines whether a tester's work proceeds according to the schedule and meets the project plan's requirements.

   Formula: [Σ (planned start time − actual start time) + Σ (planned end time − actual end time)] / Σ total working hours

   Reference index: 15%. Schedule deviation is a relative indicator: a slip of 20 working days on a test period of up to six months can still be under the required 15% of the whole test period, while a slip of only three working days on a one-week test period already exceeds 60% of the entire testing phase.

   Note: the numerator and denominator must be computed consistently; that is, non-working days are removed from the start/end time differences, and the total hours are counted in workdays. Plans are drawn up against each company's working calendar, so the schedule already excludes non-working days.
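The note's requirement to strip non-working days can be sketched as follows. This sketch assumes weekends are the only non-working days (a real calendar also needs holidays) and counts only late starts and late finishes as slip:

```python
from datetime import date, timedelta

def working_days(start, end):
    """Mon-Fri days in [start, end); returns 0 when end <= start."""
    days, d = 0, start
    while d < end:
        if d.weekday() < 5:  # Monday..Friday
            days += 1
        d += timedelta(days=1)
    return days

def schedule_deviation(tasks, total_work_days):
    """tasks: (planned_start, actual_start, planned_end, actual_end) tuples."""
    slip = sum(working_days(ps, actual_s) + working_days(pe, actual_e)
               for ps, actual_s, pe, actual_e in tasks)
    return slip / total_work_days

# One task that started two working days late on a 20-working-day plan.
dev = schedule_deviation(
    [(date(2024, 1, 1), date(2024, 1, 3), date(2024, 1, 19), date(2024, 1, 19))],
    20)
```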

 Assessing the test schedule is also a very important step: without schedule assurance, the whole test is at risk. One approach is bottom-up: when planning, testers report their own schedules to the manager. The risk here is relatively small, since each plan is sized to the individual's ability, but the drawback is that testers may pad their estimates. The other approach is top-down: the test manager estimates the schedule and then assigns the work. Here accurate estimation is the crucial premise; besides relying on the manager's experience, peer review of the estimates is a highly desirable objective method.

 Defect discovery rate  The sum of the defects found by each tester divided by the time each spent. Efficiency alone cannot show how thoroughly a tester works, so the number of defects found per hour is an important assessment target; testers can get feedback on their work through this indicator.

 Formula: Σ defects found (system test) (defects) / Σ effective system test time (hours)

 Reference index: average 1.1 defects/hour. If a tester does not find even one defect per hour, then unless product quality is unusually high or the module unusually small, their defect-finding ability is weaker than the other testers'. Of course, with a detailed severity classification, the ability to find important defects can be defined from the counts per severity.

 

Work quality indicators

 Effective defect rate  The number of defects that were rejected or deleted, and that number divided by the total number of defects reported. This index examines what share of the defects a tester reports are confirmed as real; the higher the rejected/deleted count and ratio, the lower the quality of the testing.

   Formula: Σ defects rejected or deleted (system test) (defects)

   Σ defects rejected or deleted (system test) (defects) / Σ defects found (system test) (defects)

   Reference index: average 21.9% (that is, for every 100 defects a tester reported, an average of 22 were confirmed by the development group as not being defects, or were mis-entered). The ratio is easy to give as a reference; the absolute number of invalid defects depends on the specific project's data, so no reference value can be given.

 Note: this index may be unfair, because some defects are rejected or deleted not through the tester's own fault (misoperation, misunderstanding the requirements, and so on) but because the system itself cannot realize the function or because of data errors; consider excluding that portion. When a tester finds errors caused by the basic framework, wrong initialization parameters, bad data, or environment problems that developers cannot fix in code but that are resolved by re-importing data or changing the environment and re-releasing, and the defect is then rejected or deleted, the tester should still be given credit for it.
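A sketch of the rate, assuming each reported defect carries a final status string and that the excusable rejections the note describes have already been filtered out (names illustrative):

```python
def effective_defect_stats(statuses):
    """statuses: final status per reported defect.
    Returns (invalid count, invalid ratio); 'rejected'/'deleted' are invalid."""
    invalid = sum(1 for s in statuses if s in ("rejected", "deleted"))
    return invalid, invalid / len(statuses)

# Example: 100 reported defects, 22 of which were rejected or deleted.
count, ratio = effective_defect_stats(["rejected"] * 22 + ["confirmed"] * 78)
```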

 

 Serious defect rate  This ratio compensates for the shortcomings of the defect discovery rate. It is the number of defects at each severity level divided by all defects (or all valid defects). Generally each company classifies defect severity into severe, general, and minor, or finer (typically an odd number of levels). Further, the severities can be converted into weights (e.g., severe : general : minor = 5 : 3 : 1) and a tester's score computed from the weighted sum, which needs no further elaboration.

 Formula: Σ severe / general / minor defects (defects) / Σ defects (defects)

 Σ severe / general / minor defects (defects) / Σ effective defects (defects)

 Reference index: severe ~10%, general ~70%, minor ~20%. When the proportion of severe defects among those a tester finds is higher, the test quality is relatively good; the distribution of defect counts across severities is generally close to normal.
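The severity shares and the weighted conversion can be sketched together; the 5 : 3 : 1 weights here only illustrate the conversion idea and are not a prescribed scale:

```python
WEIGHTS = {"severe": 5, "general": 3, "minor": 1}  # illustrative weights

def severity_profile(defects):
    """defects: severity label per valid defect.
    Returns (share of each severity, weighted score)."""
    shares = {s: defects.count(s) / len(defects) for s in WEIGHTS}
    score = sum(WEIGHTS[s] for s in defects)
    return shares, score

# Example matching the reference distribution: 10% severe, 70% general, 20% minor.
shares, score = severity_profile(["severe"] + ["general"] * 7 + ["minor"] * 2)
```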

 Module defect rate  This index is the number of defects found in an individually tested module divided by the module's own function points. If a module is tested separately, this index is easy to compare horizontally with other modules and their corresponding testers; the per-module defect counts also roll up into test statistics level by level and provide data for assessing the developers.

 Formula: Σ defects found (system test) (defects) / Σ function points (points)

   Σ defects found (system test) (defects) / Σ sub-function points (points)

 Reference index: average 3.74 defects per function point

 Note: some function points have no sub-function points; state how these are counted when calculating by sub-function point.

 

3. Test management

 As mentioned at the outset, assessing the test manager is more complex. Besides taking part in test design and execution, the manager's test-management ability, that is, the work of the test planning phase, must also be examined, including:

 Plan quality  The number of defects found in review of the test plan, or their ratio to the plan's page count; it can be compared with similar projects or with database averages.

 Formula: Σ defects found (peer review and formal review) (defects)

   Σ defects found (peer review and formal review) (defects) / Σ test plan document pages (pages)

 

Quality cost  This index mainly examines workload, since both wages and bonuses should be tied to it. It is the sum of the planned workload of the testing activities divided by the sum of the actual workload. Schedule deviation has already been considered in the testers' assessment; here the workload is treated as the cost factor.

   Formula: Σ planned workload of testing activities (person-days) / Σ actual workload of testing activities (person-days)

   Reference index: in principle the actual may deviate from the plan by ±15%~±20%. In effect this indicator measures cost. For a large project the estimates are often wildly off; phase-level counts can be off by as much as ±500%(!), in which case adjusting the plan is necessary, and in the final stage consider averaging the estimates. A test manager must control cost effectively to complete the task.
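A sketch of the cost check, with the ±20% band hard-coded only as an example of the tolerance the text suggests:

```python
def quality_cost(planned_person_days, actual_person_days):
    """Return (planned/actual ratio, whether it sits in the ±20% band)."""
    ratio = planned_person_days / actual_person_days
    return ratio, 0.8 <= ratio <= 1.2

# Example: 90 person-days planned, 100 actually spent.
ratio, within_band = quality_cost(90, 100)
```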

 These two indicators are the part that is relatively easy to quantify. Other quantitative indicators should be set by the project manager and the test department manager according to the department's standards, for example the management ratio (overall management time during testing as a share of the total test time) or the total number of system defects, compared against similar projects or database averages.

 

Specific assessment methods:

 1. Pool and analyze the indicators, total them in a table, chart each indicator by tester, and rank the testers 1, 2, 3, 4 on each.

 2. Determine the weight of each stage involved. For example, give test design and test execution 50% each; within each stage, efficiency counts 40% (i.e., 20% of the total) and work quality 60% (i.e., 30% of the total).

 3. Determine a score for each indicator category, taking the category average as the 100% standard and scaling points proportionally within 80%~120% of it.

 4. After the scores are worked out, form a comprehensive assessment, adding adjustment factors if necessary.

 5. It is best to include the qualitative analysis as well: qualitative indicators are scored through a project manager questionnaire and a scoring system. The suggested weight for this part should not exceed 10%~15%, so that the assessment stays measurable.
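Steps 2-4 above amount to a weighted sum; a minimal sketch of the composite score under the 50/50 stage split and 40/60 efficiency/quality split (scores on a 0-100 scale, all figures illustrative):

```python
def composite_score(stage_scores, stage_weight=0.5, efficiency_weight=0.4):
    """stage_scores: {stage: {"efficiency": x, "quality": y}} for two stages."""
    return sum(stage_weight * (efficiency_weight * s["efficiency"]
                               + (1 - efficiency_weight) * s["quality"])
               for s in stage_scores.values())

score = composite_score({
    "design": {"efficiency": 80, "quality": 90},
    "execution": {"efficiency": 70, "quality": 85},
})
```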

 When all the assessment scores have been given, one reminder: since you do the assessment, you must make the results open, and the assessment must give direction. Do not let the pursuit of metrics mislead anyone; the quality of the work is what matters most.

 

Notes on the assessment:

 

1. For projects not completed within a month, such as those delivered monthly, consider which parts can be assessed: pick the indicators that allow horizontal comparison, then assess by stage and by task.

 2. The length of time invested in testing also deserves attention. Besides the quantitative indicators, the total time a tester puts in is very important, and overtime should get special consideration. A tester who joined test execution for only three hours may score well on every indicator, but cannot be given more points than colleagues who participated far longer. This is another reason to add an adjustment factor.

 3. The assessment of test design and test execution applies to the test manager and the project testers alike, but test management should be assessed separately as an additional bonus, or incorporated into the assessment items as described above. Because the test manager plays the roles of manager and head of quality assurance on the project, he should not be treated exactly like the other engineers.

 4. Before assessing, you must consider the actual situation of the project. Do not rashly promise the test group that the assessment will be linked to salary or to an elimination mechanism; otherwise the assessment will have the opposite effect.

 

 The main purpose of assessing testers on a project team is to motivate the test group: to encourage those who perform well and to spur on those who lag behind; in addition, it helps discover talent and expose weaknesses. The assessment should reflect the principle of rewarding diligence while also embodying fairness and reasonableness, so that rewards and penalties effectively promote quality management. An important prerequisite for the method above to give satisfactory results is that the relevant project data must be fully collected: defect counts, recorded working hours, detailed work logs, and configuration management of the documents. Without these data, quantitative analysis is out of the question and the assessment of testers cannot take place.

Management that does not understand technology is not good management. Accompanying materials on interface testing for further reading:

Link: https://pan.baidu.com/s/1R1gIyktM2CqTSlZSnUppWQ
Extraction code: rn4z


Origin www.cnblogs.com/z1201-x/p/11227029.html