Day944. Metrics - System Refactoring in Practice


Hi, I am 阿昌. What I'm recording today is what I learned about metrics.

In day-to-day development we often judge things by gut feeling: the code is badly written, automated test coverage is insufficient, releases are too infrequent, and so on. We usually know roughly where a problem is, but not how to find its root cause and actually improve. This is where metrics come in.

Metrics make goals clearer during development and help us avoid heading in the wrong direction from the start. In addition, after each phase of work is completed, continuous measurement feeds back how well it went and helps us keep improving. In software development there are many metrics at every stage from requirements to production operation; a previous article already covered many metrics for automated testing from the life-cycle perspective.


1. Development indicators

Let's start with development-related metrics. The ones that most often expose problems are cyclomatic complexity, code duplication rate, invalid code lines, and code coupling.

Cyclomatic complexity, code duplication, and invalid code were already mentioned among the five typical code smells of legacy systems. Here the focus is on the purpose of each metric and how to use it in practice.

1. Cyclomatic complexity

Cyclomatic complexity refers to the nesting complexity of code. The purpose of this metric is obvious: a high value means the code's nesting is too complex, which hurts readability, comprehension, and testability.

The problem with many legacy systems is a lack of refactoring, which leaves behind huge classes and methods that are error-prone whether you extend functionality or modify logic.

In practice, this is usually checked in the static code analysis step of the pipeline's quality gate.

Generally speaking, cyclomatic complexity should not exceed 10, and the smaller the better.

When submitted code does not meet this requirement, its integration is blocked.
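
As a hypothetical illustration (not code from the original project), the deeply nested method below has a noticeably higher cyclomatic complexity than the flattened version that uses guard clauses:

```java
public class DiscountService {

    record Customer(boolean isVip) {}

    // Deeply nested version: every extra branch raises the cyclomatic complexity.
    double discountNested(Customer c, double amount) {
        if (c != null) {
            if (c.isVip()) {
                if (amount > 1000) {
                    return amount * 0.8;
                } else {
                    return amount * 0.9;
                }
            } else {
                return amount;
            }
        } else {
            return amount;
        }
    }

    // Guard clauses flatten the nesting: same behavior, lower complexity.
    double discountFlat(Customer c, double amount) {
        if (c == null || !c.isVip()) {
            return amount;
        }
        return amount > 1000 ? amount * 0.8 : amount * 0.9;
    }
}
```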


2. Code repetition rate

Duplicated code means the same lines of code appear in two or more places. The value of this metric is also obvious: when there is a lot of duplication, especially two nearly identical classes that differ only in a few lines, a change to the shared logic must be made in multiple places, which is known as "shotgun surgery".

In practice, the duplication rate can likewise be checked in the pipeline's quality gate. It is generally recommended not to exceed 5%; when submitted code fails this check, its integration should also be blocked.
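
As a hypothetical sketch of the usual fix, the parts that vary are extracted into parameters so the shared logic lives in exactly one place:

```java
import java.util.List;

public class ReportExporter {

    // Before: exportUserReport and exportOrderReport were two near-identical
    // copies that differed only in the report title.

    // After: one method, parameterized by the part that varied, so a change
    // to the shared logic happens in exactly one place.
    String exportReport(String title, List<String> rows) {
        StringBuilder sb = new StringBuilder(title).append('\n');
        for (String row : rows) {
            sb.append(row).append('\n');
        }
        return sb.toString();
    }
}
```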


3. Invalid code lines

Invalid code refers to classes, methods, or resources that are never called. Although invalid code does not affect functionality, it degrades the overall reading experience and makes the code harder to understand. It is also a very common problem in legacy systems.

This, too, can be checked with static code analysis in the pipeline: any invalid code lines found are reported to the people responsible, who are asked to remove them. During daily development, also pay attention to IDE warnings to keep the code clean before committing.


4. Code coupling

So what counts as coupling? Here, any code call relationship that violates the architectural rules is considered coupling.

For details, refer to the coupling analysis of the Sharing project in the earlier article on using automated tools to diagnose and analyze the Sharing project.

Of course, there is a premise: an architecture design with explicit rules must exist, because without rules there is no basis for judging coupling. If the code contains the kind of coupling described above, it no longer conforms to the architecture's design rules, and the architecture will keep deteriorating over time.

In practice, the architecture guard should be turned into automated test cases and added to the pipeline's quality gate, so that coupling introduced by a code submission is discovered and reported immediately, as in the sketch below.
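
A minimal sketch of such an architecture guard, assuming a Java project and the ArchUnit library (the package names are placeholders):

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ArchitectureGuardTest {

    // Import all production classes once; the root package is a placeholder.
    private final JavaClasses classes =
            new ClassFileImporter().importPackages("com.example.app");

    @Test
    void domainLayerMustNotDependOnWebLayer() {
        // Rule: domain code may never call into the web layer.
        ArchRule rule = noClasses()
                .that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..web..");
        rule.check(classes);
    }
}
```

Running such a test in the pipeline's quality gate means a submission that introduces a forbidden call relationship fails the build immediately.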


2. Automated test indicators

Automated tests require investment in design and maintenance; if the test cases you write are never executed, that effort is wasted. Continuously measuring automated testing is therefore very important work.

Let's look at four commonly used key metrics for automated testing: the number of test cases, execution frequency, execution time, and execution success rate.

1. Number of test cases

The number of test cases refers to the total number of automated test cases that are continuously executed.

By observing this metric, you can see how the project's overall investment in automated testing changes over time.

Under normal circumstances, the number of effective automated test cases should keep growing as the business iterates. Large fluctuations in the count deserve focused analysis.

The number of test cases can be obtained from the test report, and a continuous integration tool can then track how the count changes, as sketched below.
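
A minimal sketch, assuming JUnit-style XML reports (the report path is a placeholder):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.File;

public class TestCountReader {
    public static void main(String[] args) throws Exception {
        // Path to a JUnit-style XML report; the location is a placeholder.
        File report = new File("target/surefire-reports/TEST-ExampleTest.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(report);
        // JUnit XML reports carry the case count in the <testsuite tests="..."> attribute.
        String tests = doc.getDocumentElement().getAttribute("tests");
        System.out.println("Number of test cases: " + tests);
    }
}
```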


2. Execution frequency

Automated test execution frequency refers to the number of times automated tests are executed per day.

Designing and maintaining automated tests both cost effort, so the tests only deliver value when they are executed frequently.

Automated tests are usually integrated into the continuous integration pipeline, so observing their execution frequency shows whether they are actually being exercised.


3. Execution time

Automated test execution time refers to the time required to execute a set of automated test cases.

An important goal of automated testing is fast verification and feedback, so the shorter the feedback cycle, the greater the value. The execution time of test cases should therefore be observed continuously; this data is also available from the test report.

In practice, if you find that individual cases take very long to execute, re-examine them. Generally speaking, small tests execute in milliseconds, while medium and large tests execute in seconds.


4. Execution success rate

The automated test execution success rate is the number of test cases that pass divided by the total number of test cases executed.

If a case fails, analyze it promptly to rule out a newly introduced defect or broken business logic. In practice, avoid commenting out failing cases just to make the code mergeable. The execution success rate also reflects, to some extent, the quality of the code developers submit, so it is worth observing continuously. A small sketch of computing it from a report follows.
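
A minimal sketch, again assuming JUnit-style XML reports and treating failures and errors as not passed (the path is a placeholder):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import java.io.File;

public class SuccessRateReader {
    public static void main(String[] args) throws Exception {
        // JUnit-style XML report; the path is a placeholder.
        Element suite = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new File("target/surefire-reports/TEST-ExampleTest.xml"))
                .getDocumentElement();
        int tests = Integer.parseInt(suite.getAttribute("tests"));
        int notPassed = Integer.parseInt(suite.getAttribute("failures"))
                      + Integer.parseInt(suite.getAttribute("errors"));
        // Success rate = passed cases / total cases.
        System.out.printf("Execution success rate: %.1f%%%n",
                100.0 * (tests - notPassed) / tests);
    }
}
```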


3. Pipeline indicators

When a team uses a pipeline to integrate and release software, how the pipeline runs directly reflects the team's efficiency and the quality of each version. In practice we must therefore keep paying attention to the pipeline's operational metrics.

The four key indicators commonly used in the pipeline are build frequency, build time, build success rate, and average recovery time.

For all four, common continuous integration tools have plug-ins that support statistical queries.

1. Build frequency

Build frequency refers to the average frequency of continuous integration pipeline execution over a period of time.

If the trunk pipeline executes less than once per day on average, the team's code integration frequency is very low. When this happens, check whether tasks are split too coarsely or whether developers are not integrating code in time. At a minimum, the trunk should successfully build an up-to-date, usable version every day.


2. Build time

Build time refers to the average execution time of the continuous integration pipeline over a period of time. This metric affects how efficiently developers can integrate code. While coaching teams, I have seen a continuous integration pipeline whose average execution time was close to 2 hours, which in turn reduced developers' willingness to merge code and became a bottleneck of its own.

When the pipeline takes too long, first identify the time-consuming steps, then address them specifically. Running stages concurrently and layering the tests can also improve efficiency; the sketch below shows the layering idea.
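
A minimal sketch of layered tests, assuming JUnit 5 (the tag names are arbitrary): the pipeline can run the fast "small" group on every commit and push the slower groups into a later or concurrent stage.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tag tests by layer so the pipeline can give fast feedback first and run
// slower suites in a later (or concurrent) stage.
class LayeredTestExample {

    @Tag("small")   // millisecond-level unit test: run on every commit
    @Test
    void priceIsCalculatedCorrectly() {
        // ... assertions against pure in-memory logic ...
    }

    @Tag("medium")  // second-level test against a real database: later stage
    @Test
    void orderIsPersisted() {
        // ... assertions that hit external dependencies ...
    }
}
```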


3. Build success rate

The build success rate is the number of successful executions of the continuous integration pipeline divided by the total number of executions over a period of time.

If the success rate over a period is very low, say below 60%, and environmental factors have been ruled out, it means the quality of the code submitted during that period was poor. Analyze the failed runs in detail and adjust promptly.


4. Average recovery time

The average recovery time is the average interval between a pipeline failure and the next successful run over a period of time.

This metric makes the team pay more attention to how the pipeline runs, because it reflects developers' adherence to continuous integration discipline, for example whether a broken pipeline is fixed immediately. A sketch of computing it, together with the build success rate, follows.
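
A minimal sketch under the assumption that build history is available as (finish time, succeeded) pairs; the sample data is made up:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class PipelineMetrics {
    // A single pipeline run: when it finished and whether it succeeded.
    record Build(Instant finishedAt, boolean success) {}

    public static void main(String[] args) {
        // Hypothetical build history, ordered by time.
        List<Build> builds = List.of(
                new Build(Instant.parse("2023-04-01T09:00:00Z"), true),
                new Build(Instant.parse("2023-04-01T11:30:00Z"), false),
                new Build(Instant.parse("2023-04-01T12:10:00Z"), true),
                new Build(Instant.parse("2023-04-02T10:00:00Z"), false),
                new Build(Instant.parse("2023-04-02T15:00:00Z"), true));

        // Build success rate: successful runs / total runs.
        long succeeded = builds.stream().filter(Build::success).count();
        System.out.printf("Success rate: %.0f%%%n", 100.0 * succeeded / builds.size());

        // Average recovery time: mean gap between a failure and the next success.
        Duration total = Duration.ZERO;
        int recoveries = 0;
        Instant failedAt = null;
        for (Build b : builds) {
            if (!b.success() && failedAt == null) {
                failedAt = b.finishedAt();          // pipeline just turned red
            } else if (b.success() && failedAt != null) {
                total = total.plus(Duration.between(failedAt, b.finishedAt()));
                recoveries++;
                failedAt = null;                    // pipeline is green again
            }
        }
        if (recoveries > 0) {
            System.out.println("Average recovery time: "
                    + total.dividedBy(recoveries).toMinutes() + " minutes");
        }
    }
}
```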


4. Summary

Metrics can help clarify direction, give timely feedback on results, and drive continuous improvement.

In projects, measurement dashboards are usually built so the team can continuously observe how the data changes, and the data is reviewed at the team's regular retrospectives to set improvement goals.

It is not recommended to turn these metrics into KPIs, as that easily pushes things to another extreme and loses the point of measuring in the first place.

The definitions, purposes, recommended thresholds, and expected trends of the metrics above are summarized in the table below. The thresholds are general reference values and may vary with the specific product.

| Metric | Purpose | Reference threshold / expected trend |
| --- | --- | --- |
| Cyclomatic complexity | Keep code readable and testable | ≤ 10, the smaller the better |
| Code duplication rate | Avoid shotgun surgery | ≤ 5% |
| Invalid code lines | Keep the code clean and understandable | Remove whenever found |
| Code coupling | Keep the architecture from deteriorating | No calls that violate the architecture rules |
| Number of test cases | Track automated testing investment | Grows steadily as the business iterates |
| Execution frequency | Ensure tests deliver value | Executed frequently via the pipeline |
| Execution time | Keep the feedback cycle short | Small tests: ms; medium/large tests: s |
| Execution success rate | Catch newly introduced defects | Failures analyzed promptly, never commented out |
| Build frequency | Reflect integration frequency | Trunk builds at least once per day |
| Build time | Keep merge willingness high | As short as possible; avoid hours-long runs |
| Build success rate | Reflect submitted code quality | Investigate if below about 60% |
| Average recovery time | Reflect CI discipline | Fix a broken pipeline immediately |

