An introduction to unit testing

In the daily R&D process, unit tests are generally written by R&D. With the emergence of large models, code analysis technology and AI can be used as test development to intelligently generate single test cases, reducing R&D costs and shortening the entire product delivery. cycle
Unit testing is often in the code writing stage. Compared with the testing stage, it can be found at low cost and more problems. With the growing call for testing to the left, perfect, effective, high-coverage and comprehensive unit test cases are still Very valuable, let's take a look at what a single test is

Why write tests

Times are asking us to write tests

The programmer's scope of responsibility is expanding a little bit, and the key reason is that software development is becoming more and more complex .

Testing allows us to move forward steadily in increasingly complex software development. On the one hand, when writing new functions, tests can verify the correctness of our code, allowing us to have stable modules one by one. On the other hand, testing can help us keep coming back in the long-term process, making each step more stable.

There is a joke about testing: every programmer wants to have tests when modifying code, but when writing code, he doesn't want to write tests.

Writing tests pays off

The father of XML, Tim Bray, recently had a funny saying in his blog: "Code without writing tests is like going to the toilet without washing your hands... Unit testing is an essential investment in the future of software." Specifically, unit What are the benefits of testing?

  • It is the easiest test to guarantee 100% code coverage.

  • It can greatly reduce the tension index when going online.

  • Unit tests can find problems faster (see the left of the figure below).

  • Unit testing is the most cost-effective, because the later the error is found, the higher the cost of fixing it, and the difficulty increases exponentially, so we should test as early as possible (see the right of the figure below).

  • Coders, and generally the main performers of unit testing, are the only ones who can produce bug-free programs, and no one else can.

  • It helps to optimize the source code, making it more standardized, providing quick feedback, and refactoring with confidence.

picture

picture

This picture comes from Microsoft's statistics: Bugs are found in the unit testing phase, and it takes an average of 3.25 hours. If it is leaked to the system testing phase, it will take 11.5 hours. This picture is intended to illustrate two problems: 85% of defects are generated in the code design stage, and the later the stage of bug discovery, the higher the cost and the exponential increase.

Although unit testing has such benefits, in our daily work, there are still many projects whose unit testing is either incomplete or missing. The common reasons are summarized as follows: the code logic is too complex; it takes a long time to write unit tests; the task is heavy, the deadline is tight, or it is not written at all.

Based on the above problems, compared with the traditional JUnit unit test, today I recommend a test framework called Spock. At present, most of the back-end services of Meituan Optimal Logistics Technology Team have adopted Spock as the testing framework, which has achieved good benefits in terms of development efficiency, readability and maintainability.

However, the online Spock information is relatively simple, even including the demo on the official website, which cannot solve the problems faced by complex business scenarios in our project. After in-depth study and practice, this article will share some experience, hoping to help everyone improve the efficiency of development and testing.

what is a test

From self-test to automated test framework

The widespread popularity of testing frameworks is due to the automated testing framework JUnit, authored by Kent Beck and Erich Gamma . Kent Beck is the founder of Extreme Programming and is well-known in the field of software engineering, while Erich Gamma is the author of the famous "Design Patterns", and Visual Studio Code, which many people are familiar with, also has his major contributions.

Once, Kent Beck and Erich Gamma flew from Zurich to Atlanta to participate in the OOPLSA (Object-Oriented Programming, Systems, Languages ​​& Applications) conference. On the flight, two people paired programming and wrote JUnit

Introduction to Testing Framework

Junit Tutorial: https://www.baeldung.com/junit

There are two key points for us to understand the test framework, one is to understand the structure of the test organization, and the other is to understand the assertion. Mastering these two points is enough to deal with most of the daily situations.

test structure

Everyone should be familiar with how JUnit expresses test cases

@Test
public should_work() {
  ...
}

For example, the same initialization code is written repeatedly. Due to the particularity of the test, these initialization codes need to be executed before each test. In order to solve this problem, JUnit introduced setUp to do initialization work.

@BeforeEach
void setUp() {
  ...
}
  • @TestFactory – denotes a method that's a test factory for dynamic tests

  • @DisplayName – defines a custom display name for a test class or a test method

  • @Nested – denotes that the annotated class is a nested, non-static test class

  • @Tag – declares tags for filtering tests

  • @ExtendWith – registers custom extensions

  • @BeforeEach – denotes that the annotated method will be executed before each test method (previously @Before)

  • @AfterEach – denotes that the annotated method will be executed after each test method (previously @After)

  • @BeforeAll – denotes that the annotated method will be executed before all test methods in the current class (previously @BeforeClass)

  • @AfterAll – denotes that the annotated method will be executed after all test methods in the current class (previously @AfterClass)

  • @Disable – disables a test class or method (previously @Ignore)

affirmation

Let's look at the second key point of understanding the test framework, assertion. The test structure ensures that the test cases can be executed as expected, and the assertion ensures that our tests need to have a goal, that is, what we want to test.

Assertion, to put it bluntly, is to compare the execution result with the expected result. If executing a test doesn't even have expectations, what exactly is it testing? So, we can say that a test without assertions is not a good test.

Almost every testing framework has its own built-in assertion mechanism, such as the one below.

assertEquals(2, calculator.add(1, 1));

This assertEquals is the most typical assertion, and it is almost the most used assertion. Many testing frameworks in other languages ​​have also moved it intact. But this assertion has a serious problem. If you don't look at the API, you can't remember which should be the expected value and which should be the actual value returned by your function. This is a typical API design problem, which makes it difficult to use well.

Therefore, a large number of third-party assertion libraries have emerged in the community, such as Hamcrest, AssertJ, and Truth. Among them, Hamcrest is a function composition-style assertion library, which was once built into JUnit 4, but out of the encouragement of community competition, JUnit 5 moved it out again. The following is a piece of code using Harmcrest.

assertThat(calculator.subtract(4, 1), is(equalTo(3)));

AssertJ is a smooth-style library with good scalability. It is also the library we chose in the previous combat part. The following is a piece of code using AssertJ.

assertThat(frodo.getName()).startsWith("Fro")
                           .endsWith("do")
                           .isEqualToIgnoringCase("frodo");

Truth is an assertion library open sourced by Google. It is very similar to AssertJ. It supports Android programs better. I also put a piece of code, which is exactly the same as AssertJ in style.

assertThat(projectsByTeam())
    .valuesForKey("corelibs")
    .containsExactly("guava", "dagger", "truth", "auto", "caliper");

Assertions include not only the processing of return values, but also other special cases. For example, assertions can also be made when exceptions are thrown. This is a built-in exception assertion in JUnit 5. You can refer to it.

Assertions.assertThrows(IllegalArgumentException.class, () -> {
  Integer.parseInt("One");
});

For specific situations where assertions can be made, you can refer to the API documentation of the assertion library you use.

Finally, there is an assertion that is not in these assertion libraries, and that is an assertion provided by the Mock framework: verify.

Regarding the Mock framework, we will talk about it later, but here is a brief mention that the function of verify is to verify whether a function has been called. In some tests, the function neither returns a value nor throws an exception. For example, taking saving an object, the only way we can judge whether the saving action is executed correctly is to use verify to verify whether the saved function is called, as shown below.

verify(repository).save(obj);

Test Specification

How to ensure the correctness of the test?

Since it is not feasible to write tests for tests, the only feasible solution is to write tests so simple that they are clear at a glance without proving their correctness . From this, we can know that a complex test is definitely not a good test.

What should a simple test look like? Let's look at an example together, which is the first test we gave in the actual combat session.

@Test
public void should_add_todo_item() {
  // 准备
  TodoItemRepository repository = mock(TodoItemRepository.class);
  when(repository.save(any())).then(returnsFirstArg());
  TodoItemService service = new TodoItemService(repository);
  
  // 执行  
  TodoItem item = service.addTodoItem(new TodoParameter("foo"));
  
  // 断言  
  assertThat(item.getContent()).isEqualTo("foo");
  
  // 清理(可选)
  
}

I divided this test into four sections, namely preparation, execution, assertion, and cleanup . These are also the four stages that a general test will have. Let's take a look at them separately.

Prepare. This stage is for some preparations for testing, such as starting externally dependent services and storing some preset data. In our example, it is to set the behavior of the required components, and then assemble these components.

implement. This stage is the core part of the whole test, triggering the behavior of the target under test. Generally speaking, it is a test point, and in most cases, the execution should be a function call. If you're testing an external system, you're making a request. In our code, it just calls a function.

assertion. Assertions are our expectations, which are responsible for verifying that the results of execution are correct. For example, whether the system under test returns the correct response. In this example, we are verifying that the content of the Todo item is what we added.

clean up. Cleanup is a possible part. If external resources are used in the test, they should be released in time in this part to ensure that the test environment is restored to an original state, as if nothing had happened. For example, we insert data into the database during the test, and after execution, we need to delete the data inserted during the test. Some testing frameworks already provide support for some common situations, such as the temporary files we used before.

If the preparation and cleanup are common among several test cases, they may be put into setUp and tearDown to do it.

Of these four phases, the ones that must exist are Execution and Assertion. Think about it too, if you don’t implement it, you don’t have any goals, so what else to measure? Without asserting, there is no expectation, and running is also a waste of time. The cleanup part probably wouldn't be there if some resource release wasn't involved. For some simple tests, no special preparation is required.

A Journey (A-TRIP)

With a basic understanding of the test structure, let's go a step further and see how to measure whether a test is done well? Someone summed up the characteristics of a good test into one statement: A-TRIP . This is actually an abbreviation of five words, namely:

  • Automatic, automation;

  • Thorough, comprehensive;

  • Repeatable, repeatable;

  • Independent, independent;

  • Professional, professional.

What does it mean? Let's explain each separately.

Automatic, automatic. After the explanation in the previous lecture, you should have understood this point easily. Compared with traditional testing, the core enhancement of automated testing lies in automation. This is why the test must have an assertion, because only when there is an assertion , the machine can help us judge whether the test is successful .

Thorough, comprehensive. This is actually a test requirement, and tests should be used to cover various scenarios as much as possible. No matter what kind of automated testing, its essence is testing. We talked about learning from testers earlier. The key point is that this helps us write more comprehensive tests. There is another angle to understand comprehensively, that is, test coverage . We have already seen in the actual combat session how to use the test coverage tool to help us find places in the code that are not covered by the test.

Repeatable, repeatable. It requires that the tests can be run repeatedly, and the results should always be the same. This is the premise to ensure that the test is simple and reliable. The idempotence of unit tests needs to be guaranteed .

Tests performed in memory are generally repeatable. The main factor affecting the repeatability of a test is external resources. Common external resources include files, databases, middleware, third-party services, and so on. If these external resources are encountered during the test, we have to find a way to restore these resources to their original appearance after the test is over. You have already seen how to deal with files in actual combat, and we will also talk about how to deal with databases in the later application chapters. Simply put, after the test is executed, the data can be rolled back.

There is another angle to understanding repeatability, that is, a batch of tests should also be repeatable. This requires that the tests do not depend on each other, which is another feature of the tests we will discuss next.

Independent, independent. **** There should not be any dependencies between test and test . What is dependency? That is, one test depends on the results of another test run. For example, both tests depend on the database. The first test writes some data into the database when it runs, and the second test uses these data when it is executed. In other words, the second test must be executed after the first test is executed, which is called dependency.

Repeatability and independence are very closely related. Because we usually think that repeatability is that the tests are executed in a random order, and the results are the same, which depends on the tests being independent. And once the test is not independent and has dependencies, it also violates repeatability from a single test point of view.

Professional, professional. This point is missing in many people's minds. The test code is also code , and it must be maintained according to the code standard. This means that your test code should also be clearly written, such as good naming, small function writing, refactoring and even abstraction of the basic library of the test and the mode of the test.

How to write testable code

If the quality of every piece of material used to build a building cannot be guaranteed, do you dare to ask for a high-quality building in the end?

This is the embarrassing scenario faced by many teams: each module has not been verified, only knowing that the system can work when integrated. So, once a system works, the best thing to do is leave it alone. However, there's a whole host of new requirements queuing up.

Correspondingly, for a well-testable system, each module should be able to be tested independently

To improve the testability of software, the key is to improve the design of the software and write testable code.

Write composable code. From this signpost, we draw two inferences:

  • Do not create objects inside components;

     static class A {
      private AdCampaignStateMachineTest b=new AdCampaignStateMachineTest();
    
    //  public A(AdCampaignStateMachineTest b) {
    //   this.b = b;
    //  }
    
      public static void main(String[] args) {
          //推荐
    //   A a = new A(new AdCampaignStateMachineTest());
    //   a.xx();
          //不推荐
    //   A a = new A();
    //   ReflectUtil.setFieldValue(a, "b", new AdCampaignStateMachineTest());
    //   a.xx()
      }
     }
    
  • Don't write static methods.

    Mockito cannot mock static methods. The advantage of using the object method is that it is convenient for Spring DI.

By not writing static methods, we can deduce:

  • don't use global state;

  • Do not use the Singleton pattern.

In actual work, in addition to writing business code, you will also encounter third-party integration:

  • For the case of calling the library, we can define the interface, and then give the implementation of calling the third-party library, so as to achieve code isolation;

  • If our code is called by the framework, then the callback code is only a thin layer, responsible for forwarding from the framework code to the business code.

Mock framework

The test is not easy to test, the key is the software design problem. A good design can isolate many implementation details from business code (such as using DDD).

An important reason for isolation is that these implementation details are not so controllable. For example, if we rely on a database, we need to ensure that only one test is used in the database environment at the same time. In theory, this is not impossible, but the cost will be very high. For another example, if we rely on a third-party service, then we can't control it to return us the expected value. In this way, we may not be able to test many error scenarios.

The basic logic of the Mock framework is very simple. Creating a mock object and setting its behavior is mainly to give what kind of feedback when calling with what parameters. Although the logic of the Mock framework itself is very simple, it has gone through a long period of development in the early stage. Different Mock frameworks give different answers to what can be Mocked and how to perform Mocking.

Today's discussion is based on the Mockito framework, which is also the most commonly used Mock framework in the Java community.

To learn the Mock framework, you must master its two core points: ****Setting the mock object and verifying the behavior of the object .

Set the Mock object

To set up a mock object, first create a mock object. In actual combat, we have seen it.

TodoItemRepository repository = mock(TodoItemRepository.class);

The next step is to set its behavior. The following are two examples taken from actual combat.

when(repository.findAll()).thenReturn(of(new TodoItem("foo")));
when(repository.save(any())).then(returnsFirstArg());

The API of a good library should be highly expressive, like the two preceding codes, even if I don't explain it, you can know what it does by looking at the statement itself.

The core of the setting of the simulation object is two points: what is the parameter and what is the corresponding processing.

Parameter setting is actually a process of parameter matching. The core question to be answered is to judge whether the given actual parameters meet the conditions set here. As in the above code, the wording of save means that any parameter is fine, and we can also set it to a specific value, such as the following.

when(repository.findByIndex(1)).thenReturn(new TodoItem("foo"));

In fact, it is also a process of parameter matching, but some omissions are made here, and the complete writing method should be as follows.

when(repository.findByIndex(eq(1))).thenReturn(new TodoItem("foo"));

If you have a more complex parameter matching process, you can even implement a matching process yourself. But I strongly recommend you not to do this, because the test should be simple. In general, the two usages of equality and arbitrary parameters are sufficient in most cases.

After setting the parameters, the next step is the corresponding processing. Being able to set corresponding processing is the key to embodying the controllability of simulated objects. In the previous example, we have seen how to set the corresponding return value, and we can also throw exceptions to simulate exception scenarios.

when(repository.save(any())).thenThrow(IllegalArgumentException.class);

Similar to setting parameters, the corresponding processing can also be written very complicated, but I also recommend you not to do this, the reason is the same, the test should be simple. Knowing how to set the return value and how to throw an exception is enough for most cases.

Check Object Behavior

Another important behavior of the mock object is to verify the behavior of the object, that is, to know whether a method is called as expected. For example, we can expect the save function to be called during execution.

verify(repository).save(any());

This just verifies that the save method was called, we can also verify how many times this method was called.

verify(repository, atLeast(3)).save(any());

Similarly, verification also has many parameters that can be set, but I also don't recommend you to use it too complicated, even verify itself I suggest you not use too much .

The use of verify will give people a sense of security, so it will make people have a tendency to use more, but this is an illusion. When I talked about the test framework, I said that verify is actually a kind of assertion. Assertion means that this is the behavior that a function should have, and it is a behavioral contract.

Once verify is set, it actually constrains the implementation of the function. But the object of the verify constraint is the underlying component, which is an implementation detail. In other words, the result of excessive use of verify is to stifle the implementation details of a function.

Excessive use of verify, when writing code, you will have a sense of accomplishment. However, when it comes to code modification, the whole person is not good. Because the implementation details are locked by verify , once the code is modified, these verify can easily cause the test to fail.

Tests should test interface behavior, not internal implementation . Therefore, although verify is good, it is recommended to use it sparingly. If there are some scenarios where there is nothing to assert without verify, then verify should still be used.

According to the test mode, the behavior of setting the Mock object should be regarded as Stub, and the method of verifying the behavior of the object is Mock. According to the pattern, we should use Stub frequently and Mock less.

How to write unit tests

I’m not a great programmer; I’m just a good programmer with great habits.

I'm not a great programmer, just a good programmer with good habits.

—— Kent Beck

Many teams write less unit tests due to various reasons (such as poor design). But in order to improve code quality and locate problems more accurately, we should write more unit tests.

Unit tests are best written together with the implementation code to reduce the pain of subsequent supplementary testing . If you want to write a good test, the key is to do a good job of task decomposition, otherwise, in the face of a huge demand, no one knows how to write a unit test for it.

The process of writing unit tests is actually a task development process . The completion of a task code is not only to write the implementation code, but also to pass the corresponding test . Generally speaking, the task development needs to design the corresponding interface first, determine its behavior, then design the corresponding test cases according to this interface, and finally, instantiate these use cases into specific unit tests one by one.

A common problem with unit testing is that as soon as the code is refactored, the unit test crashes. This is largely due to the tight reliance of tests on implementation details. In general, unit tests are best designed for interface behavior , since this is a broader requirement. In fact, many details in the test can also be considered to be set wider, such as the setting of the simulated object, the setting of the simulated server, and so on.

test coverage

Test coverage is a metric that refers to the proportion of code that is executed when a test suite is run. One of its main functions is to tell us how much code is tested. In fact, more strictly speaking, test coverage should be called code coverage, but in most cases it is used in testing scenarios, so in many people's discussions, no strict distinction is made.

Since test coverage is a measurement indicator, we need to know what specific indicators are there. Common test coverage indicators are as follows:

  • Function coverage: How many functions defined in the code are called;

  • Statement coverage: How many statements in the code are executed;

  • Branch coverage (Branches coverage): How many branches in the control structure are executed (such as the condition in the if statement);

  • Condition coverage (Condition coverage): Whether the subexpression of each Boolean expression has been checked for different cases of true and false;

  • Line coverage: How many lines of code are tested.

Taking the function coverage rate as an example, if we define 100 functions in the code and only execute 80 after running the test, then its function coverage rate is 80/100=0.8, which is 80%.

These indicators basically know what is going on at a glance. The only thing that is a little more complicated is the condition coverage, because it tests all the true and false values ​​of each subexpression in a Boolean expression. Let’s see Take a look at the code below.

if ((a || b) && c) {
  ...
}

It is such a seemingly simple situation, because it involves three subexpressions of a, b, and c, and the true and false values ​​of each subexpression must be tested, so there are 8 situations.

picture

05d61b4eedb1d0fe5d1a04e6e4bf1fc4-1662966759_copy

In such a case where the condition is relatively simple, the condition coverage is actually very complicated. If the conditions are further increased, the complexity will be further increased, and it is not easy to fully cover the conditions in the test. This also gives us a coding hint: minimize conditions as much as possible. In fact, in real projects, many conditions are unnecessarily complicated, and some complex conditions can be split by returning early.

JaCoCo: A Java Test Coverage Tool

Next, I will take Jacoco as an example to talk about how to actually use a test coverage tool.

JaCoCo is a test coverage tool commonly used in the Java community. The name is the abbreviation of Java Code Coverage at first glance. The team that developed it originally developed an Eclipse plug-in called EclEmma, ​​which itself is used for test coverage. However, later the team found that although there are many test coverage implementations in the open source community, most of them are bound to specific tools, so they decided to start the JaCoCo project as an independent implementation that is not bound to specific tools. , making it a standard technique in the JVM environment.

We already know that there are many different indicators for test coverage. To learn a specific test coverage tool, the main thing is to make a corresponding indicator and know how to set the corresponding indicator.

In JaCoCo, the concept corresponding to the index is counter. Which indicators we want to use in the coverage, that is, which different counters to specify.

Each counter provides different configurations, such as the number of coverage (COVEREDCOUNT), the number of non-coverage (MISSEDCOUNT), etc., but we only care about one thing: coverage (COVEREDRATIO).

With the counter and the configuration selected, the next thing to determine is the range of values, that is, the maximum and minimum values. For example, what we focus on here is what the coverage value should be, and generally it is the minimum value (minimum) for configuring it.

Coverage is a ratio, so its value ranges from 0 to 1. We can configure it according to the needs of our own projects. According to the above introduction, if we require the line coverage to reach 80%, we can configure it like this.

counter: "LINE", value: "COVEREDRATIO", minimum: "0.8"

Ok, you now have a basic understanding of JaCoCo. But usually in the project, we seldom use it directly, but combine it with the automation process of our project.

Using test coverage in your project

This is the value of automated inspection. Under normal circumstances, as long as you do a good job, it will work silently below and will not affect you. Once you forget something due to some negligence, it will jump out to remind you.

Whether it is Ant, Maven, or Gradle, the mainstream automation tools in the Java community provide support for JaCoCo, and we can configure them according to the tools we choose. In most cases, configure once and the whole team can use it.

Maven jacoco configuration

   <plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>${jacoco.version}</version>
    <executions>
     <execution>
      <id>default-prepare-agent</id>
      <goals>
       <goal>prepare-agent</goal>
      </goals>
     </execution>
     <execution>
      <id>default-report</id>
      <phase>test</phase>
      <goals>
       <goal>report</goal>
      </goals>
     </execution>
     <!--                    <execution>-->
     <!--                        <id>default-check</id>-->
     <!--                        <goals>-->
     <!--                            <goal>check</goal>-->
     <!--                        </goals>-->
     <!--                        <configuration>-->
     <!--                            <rules>-->
     <!--                                <rule>-->
     <!--                                    <element>BUNDLE</element>-->
     <!--                                    <limits>-->
     <!--                                        <limit>-->
     <!--                                            <counter>INSTRUCTION</counter>-->
     <!--                                            <value>COVEREDRATIO</value>-->
     <!--                                            <minimum>0.8</minimum>-->
     <!--                                        </limit>-->
     <!--                                        <limit>-->
     <!--                                            <counter>BRANCH</counter>-->
     <!--                                            <value>COVEREDRATIO</value>-->
     <!--                                            <minimum>0.8</minimum>-->
     <!--                                        </limit>-->
     <!--                                        <limit>-->
     <!--                                            <counter>COMPLEXITY</counter>-->
     <!--                                            <value>COVEREDRATIO</value>-->
     <!--                                            <minimum>0.8</minimum>-->
     <!--                                        </limit>-->
     <!--                                        <limit>-->
     <!--                                            <counter>LINE</counter>-->
     <!--                                            <value>COVEREDRATIO</value>-->
     <!--                                            <minimum>0.8</minimum>-->
     <!--                                        </limit>-->
     <!--                                        <limit>-->
     <!--                                            <counter>METHOD</counter>-->
     <!--                                            <value>COVEREDRATIO</value>-->
     <!--                                            <minimum>0.8</minimum>-->
     <!--                                        </limit>-->
     <!--                                    </limits>-->
     <!--                                </rule>-->
     <!--                            </rules>-->
     <!--                        </configuration>-->
     <!--                    </execution>-->
    </executions>
   </plugin>

The key point here is to link the test coverage check to the commit process. In practice, we run the check script before committing, and the coverage check is part of that script. This ensures the check is not an isolated step: it plays a role both in our daily development process and in continuous integration.

In daily development, what we really have to deal with is a failing coverage check. For example, in our hands-on project, when the check script runs and coverage is insufficient, we get a prompt like the following.

Rule violated for package com.github.dreamhead.todo.cli.file: lines covered ratio is 0.9, but expected minimum is 1.0

Which errors are reported here depends on which counters we have configured. My habit is to configure all of them, so that more problems can be surfaced.
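
For intuition, a COVEREDRATIO rule is nothing more than the ratio of covered items to total items compared against the configured minimum. Here is a minimal sketch of that arithmetic; this is not JaCoCo's API, and the class and method names are invented for illustration:

```java
// Hypothetical sketch of how a COVEREDRATIO rule is evaluated. This is NOT
// JaCoCo's API -- just the arithmetic behind the failure message above.
class CoverageRule {
    // Returns true when covered/total meets the configured minimum ratio.
    static boolean satisfied(int covered, int total, double minimum) {
        if (total == 0) {
            return true; // nothing to cover, nothing to fail
        }
        return (double) covered / total >= minimum;
    }

    public static void main(String[] args) {
        // 9 of 10 lines covered: ratio 0.9 fails a minimum of 1.0, which is
        // exactly the failure message quoted above.
        System.out.println(satisfied(9, 10, 1.0)); // false
        System.out.println(satisfied(9, 10, 0.8)); // true
    }
}
```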

However, this prompt only tells us that coverage is insufficient; to see exactly what is missing, we need the coverage report. The report is typically configured when we integrate the tool: JaCoCo can produce several report types, such as XML, CSV, and HTML. I prefer the HTML report, which can be opened and read directly in a browser, but you can configure another format if your tools require it.

The location of the generated report is also configurable. In the hands-on project I put it under $buildDir/reports/jacoco, where $buildDir is each module's build output directory, generally the build directory. So whenever a build fails because of test coverage, I open the index.html file in that directory, which gives an overview of all the coverage for the module.


In the hands-on project, the configured coverage requirement is 100%, so the uncovered spots are easy to find: they are the ones marked red. From there we can drill down, find the specific class, then the specific method, and finally locate the specific statement. Below is a problem we located this way.


After locating the coverage gap, the next step is to close it. In simple cases, the missing scenarios can be covered by adding or adjusting a few tests. But some are not so easy to cover; for example, in the hands-on project we ran into an IOException declared by the Jackson API that was hard to trigger.

How to solve such problems varies from person to person. The genuinely controversial question here is why the coverage requirement is set to 100%.

In real projects, many people who don't want to write tests hope the number is as low as possible, but we also know very well that setting it too low is meaningless.

Integration Testing

Compared with unit testing, which focuses on the behavior of a single unit, integration testing focuses on how multiple components behave together. There are two kinds: integration between pieces of our own code, and integration between our code and external components.

For integration between pieces of our own code, on the one hand we must consider how the units we wrote cooperate; on the other hand, when frameworks are used, we must consider integration with the framework. If we already have unit tests, this kind of integration test is mainly concerned with whether the whole link is connected, so it is generally enough to assemble the related code and test along a single execution path.

If a framework is involved, it is best to bring the framework into the integration as well. A well-designed framework has good support for testing (Spring Boot, for example), which lets us test easily.

For integration with external components, the difficulty lies in controlling the state of those components. Databases have a relatively mature solution here: use a separate database and roll back after the test is over.

Most other systems do not have such a clean solution, especially third-party services. Then we have to look for a suitable substitute; for most REST APIs, a mock server can stand in for the real service.

Some code is hard to cover in automated scenarios because of infrastructure problems. This is why we emphasize that the code coupled to the framework must be kept thin: it minimizes the impact of that code and reduces the amount of work left for higher-level tests to cover.

How to unit test in Spring project

Before Spring Boot appeared, development was still constrained by the packaging-and-deployment model, a problem that was never fundamentally solved. But Spring's lightweight development philosophy kept driving it forward. Since the web server could not be abandoned at the time, Spring chose another route: start with test support.

So Spring provides a way to test that lets us fully verify the code we write locally, before final packaging. You have seen how to test with Spring in the hands-on sessions. Simply put: use unit tests to build a stable business core, and use the infrastructure Spring provides for integration tests.

Strictly speaking, building a stable business core does not depend on Spring at all, but Spring provides the infrastructure for assembling components together: a Dependency Injection (DI) container. We usually rely on the DI container to do our work, and it is precisely because DI containers are so convenient that they are often misused in ways that hinder testing.

Therefore, the key to unit testing in a Spring project is to ensure that the code is composable, that is, wired through dependency injection. You might say that since we all use Spring, the code must be composable. Not necessarily: some wrong practices damage dependency injection, which in turn makes unit testing difficult.

Don't use field-based injection

A typical mistake is field-based injection, like the following. To put a mock into such a field, you have to use reflection.

@Service
public class TodoItemService {
  @Autowired
  private TodoItemRepository repository;

}

@Autowired is a very useful feature: it tells Spring to inject the corresponding component for us automatically. Putting it on a field is easy to write, but very unfriendly to unit testing, because you then need some cumbersome way, such as reflection, to set that field's value.

What should we do instead of field-based injection? It is very simple: provide a constructor and put @Autowired on the constructor, like the following.

@Service
public class TodoItemService {
  private final TodoItemRepository repository;

  @Autowired
  public TodoItemService(final TodoItemRepository repository) {
    this.repository = repository;
  }
  ...
}

This way, when writing tests, we can treat the service like an ordinary object. If you can't remember the specifics, review the hands-on sessions.
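
To make "test it like an ordinary object" concrete, here is a minimal sketch in plain Java. TodoItem and TodoItemRepository are simplified stand-ins for the hands-on project's classes, and the fake repository is hand-rolled rather than Mockito-based, just to show that no framework is needed:

```java
// A sketch of testing a constructor-injected service as a plain object.
// TodoItem and TodoItemRepository are simplified stand-ins for the hands-on
// project's classes; the fake repository is hand-rolled.
import java.util.ArrayList;
import java.util.List;

class TodoItem {
    private final String content;
    TodoItem(String content) { this.content = content; }
    String getContent() { return content; }
}

interface TodoItemRepository {
    TodoItem save(TodoItem item);
}

class TodoItemService {
    private final TodoItemRepository repository;
    TodoItemService(TodoItemRepository repository) { this.repository = repository; }
    TodoItem addTodoItem(String content) { return repository.save(new TodoItem(content)); }
}

class TodoItemServiceTest {
    public static void main(String[] args) {
        // No Spring, no reflection: the fake goes in through the constructor.
        List<TodoItem> saved = new ArrayList<>();
        TodoItemRepository fakeRepository = item -> { saved.add(item); return item; };

        TodoItemService service = new TodoItemService(fakeRepository);
        TodoItem added = service.addTodoItem("foo");

        if (saved.size() != 1 || !"foo".equals(added.getContent())) {
            throw new AssertionError("expected exactly one saved item with content 'foo'");
        }
        System.out.println("ok");
    }
}
```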

Generally, the IDE can generate this kind of constructor with a shortcut, so the extra code is not a heavy burden. If you still dislike the redundancy, you can use Lombok annotations (Lombok is a library that generates code for us) to simplify it, as shown below.

@Service
@RequiredArgsConstructor
public class TodoItemService {
  private final TodoItemRepository repository;
  private final Bean1 bean1;
  ...
}

Don't depend on ApplicationContext

Another typical mistake when using Spring is to fetch dependencies from the ApplicationContext, like the following.

@Service
public class TodoItemService {
  private final TodoItemRepository repository;

  @Autowired
  public TodoItemService(final ApplicationContext context) {
    this.repository = context.getBean(TodoItemRepository.class);
  }
  ...
}

Having ApplicationContext appear in core business code is completely wrong. On the one hand, it breaks the design intent of the DI container; on the other, it makes the core business code depend on third-party code (namely, ApplicationContext).

Look at it from a design point of view: the presence of ApplicationContext means the test must also construct an ApplicationContext and register the needed components in it before the code can look them up, which complicates an originally simple test.

You see, a normal test is that simple, yet because Spring is involved, many people get it wrong. Spring's greatest advantage is that business code need not depend on Spring at the code level; the wrong practices create exactly that deep dependence.

How to conduct integration testing in Spring projects

database test

Today a database is standard equipment in almost every commercial project, so Spring provides good support for database testing. We said before that a good test must be repeatable; applied to the database, that means the database must be in the same state before and after the test. How do we achieve this?

test configuration

There are usually two approaches. One is an embedded in-memory database: after the test executes, the data in memory is simply discarded. The other uses a real database: to keep the database consistent before and after the test, we roll the transaction back instead of actually committing the data.

A key point of testing is that we cannot modify the code at will. Remember: the code must not be modified just for the needs of a test. If you do find yourself changing it, perhaps it is the design that should change, not merely the code.

While the code cannot be modified, we can provide different configuration. As long as we give the application different database connection information, it will connect to a different database. Spring gives us this opening: declare a different property configuration in the test. Here is an example.

@ExtendWith(SpringExtension.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@TestPropertySource("classpath:test.properties")
public class TodoItemRepositoryTest {
  ...
}

In this code, the test configuration is supplied by @TestPropertySource: it replaces our default configuration (that is, our real database) with the configuration in the test.properties file on the classpath.

Embedded in-memory database

As we said earlier, there are two ways to ensure database repeatability: an embedded in-memory database and transaction rollback. To use an embedded in-memory database, we need to provide its configuration. In the Java world, common embedded in-memory databases include H2, HSQLDB, Apache Derby, and so on. We just need to declare a test dependency; taking H2 as an example:

testImplementation "com.h2database:h2:$h2Version"

Then, provide a corresponding configuration, like the following.

jdbc.driverClassName=org.h2.Driver
jdbc.url=jdbc:h2:mem:todo;DB_CLOSE_DELAY=-1
hibernate.dialect=org.hibernate.dialect.H2Dialect
hibernate.hbm2ddl.auto=create

With any luck, your tests should run without a hitch. Yes, with any luck.

Software development is a serious business and should not be left to luck, which is why we have to talk about the catch with embedded in-memory databases: the embedded engine is not the engine you run in production, and differences in SQL dialect and feature support mean a test can pass against H2 yet fail against the real database, or the reverse.

So the embedded in-memory database looks beautiful, but I don't use it much in real projects; I use transaction rollback more.

transaction rollback

With transaction rollback, our configuration is almost the same as the standard application configuration. Here is the configuration we use in the hands-on project.

spring.datasource.url=jdbc:mysql://localhost:3306/todo_test?useUnicode=true&characterEncoding=utf-8&useSSL=false&allowPublicKeyRetrieval=true
spring.datasource.username=todo
spring.datasource.password=geektime
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

Generally speaking, in order to prevent data conflicts between the testing process and the development process, we will create two different databases. In MySQL, these are two SQL statements.

create database todo_dev;
create database todo_test;

This way, one database is used for manual testing and the other for automated testing; you can tell them apart by the suffix in the name. By the way, the broad popularity of this practice stems from Ruby on Rails (a Ruby web framework), which greatly disrupted the industry's software development practices.

With this approach, our code faces the same database engine, so we don't have to worry about SQL incompatibilities.

The transaction rollback we are talking about is embodied in @DataJpaTest, which makes rollback the default behavior, so we get this capability without doing anything.

As with most tests, testing integration with a database requires some preparation, usually data inserted into the database in advance. We can use the infrastructure Spring prepares for us, TestEntityManager, to do this. Here is an example.

@ExtendWith(SpringExtension.class)
@DataJpaTest
public class ExampleRepositoryTests {
  @Autowired
  private TestEntityManager entityManager;

  @Test
  public void should_work() throws Exception {
    this.entityManager.persist(new User("sboot", "1234"));
    ...
  }
}

If you are not using JPA but some other data access method, Spring also provides @JdbcTest. It is a more basic configuration: it works as long as there is a DataSource, which suits the vast majority of cases. Correspondingly, preparing data is more direct, using SQL. Here is an example.

@JdbcTest
@Sql({"test-data.sql"})
class EmployeeDAOIntegrationTest {
  @Autowired
  private DataSource dataSource;
  
  ...
}

Web interface testing

In addition to the database, another thing that has almost become standard today is the Web. Spring also provides very good support for Web testing.

If you work the way we did in the hands-on project, you will find that by the time we reach the web interface, almost all the work is done; what remains is to expose an interface that connects the outside world to our system. Earlier we tested the system by integrating everything; the key there is @SpringBootTest, which wires all the components together.

@SpringBootTest
@AutoConfigureMockMvc
@Transactional
public class TodoItemResourceTest {
  ...
}

When discussing integration testing, I said there are two kinds: tests that integrate all of our code, and tests that integrate external components. From the code's point of view, a web interface test targets only a single unit, yet the web machinery is integrated, so it has characteristics of both a unit test and an integration test. There is in fact a way to test web interfaces that is closer to unit testing: @WebMvcTest.

@WebMvcTest(TodoItemResource.class)
public class TodoItemResourceTest {
  ...
}

As you can see in this code, we specify the component to test: TodoItemResource. This test does not integrate all the components, only the parts related to TodoItemResource, yet the whole web processing pipeline is complete.

Viewed as a unit test, everything behind the service layer is external, so we can keep it under control with mock objects. This is where @MockBean comes into play.

@WebMvcTest(TodoItemResource.class)
public class TodoItemResourceTest {
  @MockBean
  private TodoItemService service;
  
  @Test
  public void should_add_item() throws Exception {
    when(service.addTodoItem(TodoParameter.of("foo"))).thenReturn(new TodoItem("foo"));
    ...
  }
}

Here, the TodoItemService mock marked with @MockBean participates in component assembly and becomes part of what TodoItemResource uses, and we can program its behavior. If the web interface has a more complex interaction with the service layer, this approach handles it well. Of course, as we keep saying, I don't recommend overcomplicating things here.

@WebMvcTest, which leans toward unit testing, executes faster than @SpringBootTest, which integrates everything. So as the number of tests grows, @WebMvcTest has a clear advantage.

There is one more key point to understanding web interface testing. As I said earlier, Spring shed most of its dependence on the application server back then, but the Web part never quite did. So the problem facing Spring was how to test well without relying on a web server. The answer: Spring provides a simulated web environment.

In our tests, that is the role played by the MockMvc object. Let's review its usage with the following code.

@SpringBootTest
@AutoConfigureMockMvc
@Transactional
public class TodoItemResourceTest {
    @Autowired
    private MockMvc mockMvc;
    ...

    @Test
    public void should_add_item() throws Exception {
        String todoItem = "{ " +
                "\"content\": \"foo\"" +
                "}";
        mockMvc.perform(MockMvcRequestBuilders.post("/todo-items")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content(todoItem))
                .andExpect(status().isCreated());
        assertThat(repository.findAll()).anyMatch(item -> item.getContent().equals("foo"));
    }
}

The key here is @AutoConfigureMockMvc, which configures MockMvc for us; the rest is simply using this configured environment to make requests.

From an implementation point of view, this is a simulated web environment: no real web server is started at all; our code is invoked directly, skipping the request's trip across the network. But the main server-side processing is all there, from the various filters to the conversion of the request body into a request object. Now you should see that MockMvc is an important part of Spring's lightweight development story.

Spring has invested great effort in supporting lightweight development, which is why we can verify most things before integrating the whole system. What I've introduced here is only the most typical usage; Spring's testing support is a treasure worth digging into, and its documentation will show you more interesting uses.

Now we have a basic understanding of how to do unit testing and integration testing well in real projects. But how should the different types of tests be combined in practice? That is what we will discuss next.

Various test ratios

Characteristics of tests

(figure: characteristics of the different types of tests)

OK, so far you have an understanding of the characteristics of the common kinds of tests. Next, let's look at the models for combining them.

Test ratio models

The so-called test ratio is really about which kind of test to write more of, and that decision comes down to different starting points. Some people think a single test should cover as wide a range as possible, so more system tests should be written. Others think tests should weigh speed and cost, so more unit tests should be written.

Because of these different starting points, two typical test ratio models exist in the industry: the ice cream cone model and the test pyramid model.

Let's first look at the ice cream cone model, as shown in the figure below.


In this figure, unit tests are at the bottom, indicating the lowest level; the level rises gradually, and system tests (the end-to-end tests in the figure) sit at the top as the highest-level tests. All the automated tests form the cone, while the ice cream on top is manual testing.

The width of each layer indicates the number of tests. It is easy to read off the ratio this model expects: a small number of unit tests and a large number of system tests.

The starting point of the ice cream cone is the coverage of a single test: a few system tests are enough to cover most of the system. For scenarios system tests cannot reach, lower-level tests fill in, such as integration tests and unit tests. In this model, high-level tests are the main force, and low-level tests are only a supplement.

After understanding the ice cream cone model, let's look at the test pyramid. The picture below shows the test pyramid.


In presentation, the test pyramid is consistent with the ice cream cone: lower layers are lower-level tests, higher layers are higher-level tests, and the width of each layer indicates the number of tests.

This figure comes from Martin Fowler's Test Pyramid article. From the overall shape it is easy to see that the test pyramid is the opposite of the ice cream cone: its emphasis is on writing more unit tests, with the number of tests decreasing layer by layer as the level rises.

The starting point of the test pyramid is that low-level tests are cheap, fast, and collectively wide in coverage, so more of them should be written. Because the low-level tests cover almost all situations, the high-level tests only need to sweep the broad strokes, ensuring that the components cooperate correctly. In this model, unit tests are the main force, and high-level tests are the supplement.

OK, now that we understand the two test ratio models, the next question is how to use them.

From the industry's perspective, the test pyramid is already the accepted best practice. It is grounded in unit tests: their low cost and fast feedback help us during development, and the pyramid is also easier to stick to for a team that wants to write tests.

What we used in the hands-on project is in fact the pyramid model: mainly unit tests, with a small amount of integration or system testing. So if you are starting a new project, it is best to use the test pyramid, writing tests layer by layer as we did in the hands-on sessions: every time a function is completed, code and tests are written in sync and the code is always verified, so we can move forward steadily.

Since the test pyramid has become the industry best practice, why do we still need the ice cream cone model? Because not all projects are new projects.

For various historical reasons, many legacy projects have no tests. When, after some time, the team starts to care about product quality, everyone begins to backfill tests.

When backfilling tests, we want to establish a safety net quickly, and the quick start is system testing: a few high-level tests cover most of the system's functionality, a "low investment, quick results" approach. This is an important reason many people like the ice cream cone model.

But be clear: this is fine when backfilling tests; taking it as the norm for development is not. It is like the relationship between medical treatment and exercise: a hospital visit can fix certain problems quickly, but you shouldn't make hospital visits your routine; only regular exercise reduces the trips to the hospital.

So the ice cream cone model is the starting point for writing tests on a legacy project. Once a baseline safety net exists, we should still move toward the test pyramid, with unit tests as the foundation of the whole. Newly written code should organize its tests according to the test pyramid, which is the sustainable direction. How exactly to write tests on legacy systems is the topic we will discuss in the next lecture.

Best Practices

Junit+Mockito

JUnit tutorial: https://www.baeldung.com/junit

Mockito tutorial: https://www.baeldung.com/mockito-series

Groovy+Spock

Groovy tutorial: https://www.baeldung.com/groovy-language

Spock tutorial: https://www.baeldung.com/groovy-spock https://zhuanlan.zhihu.com/p/399510995

Spock-Spring Tutorial: https://www.baeldung.com/spring-spock-testing

SonarQube

SonarQube is an open-source code quality management platform that helps development teams monitor and manage code quality to improve software quality. Here are some of the things SonarQube can do:

  1. Static code analysis: SonarQube can analyze code statically to find potential defects, vulnerabilities, duplicated code, and other issues. Static analysis examines the code without actually running it.

  2. Code quality assessment: SonarQube can evaluate the quality of code according to a set of criteria. These criteria include code complexity, maintainability, readability, test coverage, etc. The results of the assessment can help the development team understand the quality of the code and take appropriate measures to improve it.

  3. Code quality tracking: SonarQube can help the development team track changes in code quality so that code quality problems can be discovered and resolved in a timely manner. The development team can use SonarQube to monitor the quality of the code base, and find and solve problems in time when the code changes.

  4. Continuous integration: SonarQube can be integrated with common continuous integration tools (such as Jenkins, Travis CI, etc.), making code analysis and quality inspection a part of continuous integration. This will help the team find and fix issues early in the development cycle.

  5. Code compliance check: SonarQube is able to conduct compliance checks on the code to ensure that the code complies with industry standards and best practices. This helps teams write more consistent and maintainable code.

  6. Defect Management: SonarQube is able to track code defects and assign defects to the corresponding developers. Developers can use SonarQube to manage their defect list and resolve issues in the code.

TDD

The rhythm of TDD: red, green, refactor.

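
The rhythm can be sketched in plain Java. All names here are invented for illustration, and plain assertions stand in for JUnit to keep the sketch self-contained:

```java
// The red-green-refactor rhythm on a toy example. All names here are made up
// for illustration; plain Java stands in for JUnit to stay self-contained.
class TddRhythmDemo {
    // Green: the simplest code that makes the current tests pass.
    // Refactor: clean up (extract, rename, de-duplicate) while staying green.
    static int add(String numbers) {
        if (numbers.isEmpty()) {
            return 0;
        }
        int sum = 0;
        for (String part : numbers.split(",")) {
            sum += Integer.parseInt(part.trim());
        }
        return sum;
    }

    public static void main(String[] args) {
        // Red: each of these checks was written first and watched to fail
        // before the production code above existed.
        check(add(""), 0);
        check(add("7"), 7);
        check(add("1, 2, 3"), 6);
        System.out.println("all green");
    }

    static void check(int actual, int expected) {
        if (actual != expected) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```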

automated test

https://hellosean1025.github.io/yapi/

pressure test

https://jmeter.apache.org/

https://mimeter.be.mi.com/scenes-info?scene=0&apiProtocol=1

Performance Testing

https://github.com/openjdk/jmh

Unit test specification recommendations

  • From the ice cream cone to the test pyramid

  • Connect the unit test process to the CI/CD pipeline

  • Do not skip local compilation and testing!

  • When modifying a piece of code, supplement its unit tests; if they are hard to write, refactor first, and aim for unit test coverage above 80%

  • Unit tests go in the corresponding package under the test directory, with class names of the form xxxTest

CampaignGateway.class --> CampaignGatewayTest.class

ChatGPT's testing advice

For Java legacy projects, writing unit tests may be harder: such projects often lack good architecture and design, and the code may be complex and difficult to test. Here are some suggestions to help you write unit tests in Java legacy projects:

  1. Choose the right unit testing framework: Choose a unit testing framework that suits your project, such as JUnit or TestNG. These frameworks have extensive documentation and community support to help you get started quickly and start writing unit tests.

  2. Identify test points: You need to identify the critical points you want to test to ensure that you are testing the most critical parts. You can select parts with lower coverage for testing, or give priority to testing parts that have a greater impact on system stability and performance.

  3. Add tests incrementally: As legacy projects may lack good architecture and design, it may be difficult to write comprehensive test cases. It is recommended that you start with a small, self-contained module and gradually add tests while improving the code structure and design to make it easier to test.

  4. Using Mocks and Stubs: Legacy code may depend on external systems or libraries, which can make unit testing difficult. In this case, you can use Mock and Stub to mock external dependencies to ensure that your tests are not affected by the external environment.

  5. Continuous integration and automated testing: To ensure your tests are reliable and repeatable, it is recommended that you use a continuous integration tool, such as Jenkins or Travis CI, to automatically run your tests and run them after each code commit.
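Point 4 above can be sketched in plain Java with a hand-rolled stub and mock. ExchangeRateApi and PriceService are hypothetical names invented for this illustration; in a real project you would more likely use Mockito:

```java
// Point 4 sketched in plain Java: a stub feeds canned answers in, a mock
// records what went out so the interaction can be verified. ExchangeRateApi
// and PriceService are hypothetical names invented for this illustration.
import java.util.ArrayList;
import java.util.List;

interface ExchangeRateApi { // the external dependency we cannot call in tests
    double rate(String from, String to);
}

class PriceService {
    private final ExchangeRateApi api;
    PriceService(ExchangeRateApi api) { this.api = api; }
    double inUsd(double amountEur) { return amountEur * api.rate("EUR", "USD"); }
}

class StubAndMockDemo {
    public static void main(String[] args) {
        // Stub: a canned answer keeps the test independent of the real service.
        ExchangeRateApi stub = (from, to) -> 1.1;
        double price = new PriceService(stub).inUsd(100.0);

        // Hand-rolled mock: record the calls, then verify the interaction.
        List<String> calls = new ArrayList<>();
        ExchangeRateApi mock = (from, to) -> { calls.add(from + "->" + to); return 1.0; };
        new PriceService(mock).inUsd(1.0);

        if (Math.abs(price - 110.0) > 1e-9 || !calls.equals(List.of("EUR->USD"))) {
            throw new AssertionError("unexpected stub/mock behaviour");
        }
        System.out.println("ok");
    }
}
```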

In conclusion, writing unit tests requires skill and experience and takes time and effort, but it helps you understand your code better and ensures its quality. When writing unit tests, follow good coding practices such as the Single Responsibility Principle and the Dependency Inversion Principle, so that your tests are readable, maintainable, and extensible.

  1. Refactor your code to make it easier to test: When you are writing unit tests, you may find that your code is difficult to test, which may be due to the lack of good architecture and design of the code. In this case, you might consider refactoring your code to make it easier to test. Some common refactoring techniques include splitting methods, extracting interfaces, eliminating duplicate code, etc.

  2. Use code coverage tools: Using code coverage tools can help you understand whether your test cases are covering enough code. These tools can help you find areas not covered by your test cases and help you write more comprehensive test cases.

  3. Write maintainable test cases: It is important to write maintainable test cases. This means that your test cases should be easy to understand and modify, and should clearly describe the purpose of your test. You can use annotations and naming conventions to help you write test cases that are easy to maintain.

  4. Learn how to handle edge cases: When writing unit tests, you need to consider a variety of situations, including edge cases and exceptions. These can cause your code to crash or behave incorrectly, so make sure your test cases cover them.
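A sketch covering the normal path, a boundary, and the error path for one hypothetical method, in plain Java:

```java
class SafeDivider {
    // Returns a / b, treating division by zero as an explicit error.
    static int divide(int a, int b) {
        if (b == 0) throw new IllegalArgumentException("division by zero");
        return a / b;
    }
}

class SafeDividerTest {
    public static void main(String[] args) {
        // Normal case
        if (SafeDivider.divide(10, 2) != 5) throw new AssertionError();
        // Edge case: negative operands
        if (SafeDivider.divide(-9, 3) != -3) throw new AssertionError();
        // Exceptional case: the error path must also be covered
        boolean threw = false;
        try {
            SafeDivider.divide(1, 0);
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected an exception for b == 0");
        System.out.println("edge case tests passed");
    }
}
```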

  5. Collaborate with team members: Writing unit tests is the responsibility of the entire team. Work with your team members to discuss testing strategies and approaches to ensure your test cases cover critical points and are effective. When writing unit tests, you can also collaborate with other developers and testers to ensure that your test cases are comprehensive and meet the needs of the system.

  6. Use data-driven testing: Data-driven testing is an approach to testing that separates test data from test code so that you can add, remove, and modify test data more easily. You can use data-driven tests to test different aspects of your system to ensure that your code can handle various data situations.
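A minimal data-driven sketch in plain Java: the test data lives in a table, one loop applies the same check to every row, and adding a case means adding a row. The leap-year rule is just a convenient example:

```java
class Leap {
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}

class LeapDataDrivenTest {
    public static void main(String[] args) {
        // Test data kept separate from test logic: {input, expected}.
        Object[][] cases = {
            {2000, true},   // divisible by 400
            {1900, false},  // divisible by 100 but not 400
            {2024, true},   // divisible by 4
            {2023, false},  // ordinary year
        };
        for (Object[] c : cases) {
            int year = (Integer) c[0];
            boolean expected = (Boolean) c[1];
            if (Leap.isLeapYear(year) != expected) {
                throw new AssertionError("isLeapYear(" + year + ") should be " + expected);
            }
        }
        System.out.println("all data-driven cases passed");
    }
}
```

Frameworks formalize the same idea, e.g. JUnit's parameterized tests or TestNG's `@DataProvider`.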

  7. Choose the right assertions: Choosing the right assertions can help you test the correctness of your code. Unit testing frameworks such as JUnit and TestNG provide a variety of assertion methods, including methods for comparing objects, arrays, boolean values, and more. You can choose the appropriate assertion method according to your needs.
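Choosing the wrong comparison is a classic mistake with arrays: `==` compares references, not contents, which is why JUnit provides a separate `assertArrayEquals`. A plain-Java sketch of the difference:

```java
import java.util.Arrays;

class AssertionDemo {
    public static void main(String[] args) {
        int[] expected = {1, 2, 3};
        int[] actual = {1, 2, 3};
        // Reference comparison is the wrong assertion for array contents:
        boolean sameReference = (expected == actual);          // false: different objects
        // Content comparison is the right one:
        boolean sameContent = Arrays.equals(expected, actual); // true: same elements
        if (sameReference || !sameContent) throw new AssertionError();
        System.out.println("content comparison chosen correctly");
    }
}
```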

  8. Use an appropriate test level: Unit tests exercise a single component or a single method of your code. Beyond unit testing there are other levels, including integration testing and end-to-end testing. Keep unit tests focused on the lowest level of your code and add higher-level tests as needed.

  9. Keep test cases independent: Test cases should be independent of each other, which means they should not depend on other test cases or test order. Independent test cases can ensure the reliability of test results and make test cases easy to maintain and modify.
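A sketch of independent tests in plain Java: each test constructs its own fresh fixture, so the tests pass in any order and share no state. The `Cart` class is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

class Cart {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    int size() { return items.size(); }
}

class CartTest {
    // Each test builds its own fresh Cart rather than sharing one instance.
    static void testAddOneItem() {
        Cart cart = new Cart();
        cart.add("book");
        if (cart.size() != 1) throw new AssertionError("expected 1 item");
    }

    static void testEmptyCart() {
        Cart cart = new Cart();
        if (cart.size() != 0) throw new AssertionError("expected empty cart");
    }

    public static void main(String[] args) {
        // Order does not matter because no state leaks between tests.
        testEmptyCart();
        testAddOneItem();
        System.out.println("independent tests passed");
    }
}
```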

  10. Record and analyze test results: When writing unit tests, you should record test results and analyze them. This can help you identify why a test case is failing and find the parts of the code that need improvement. You can use test reporting tools and analysis tools to help you record and analyze test results.

In conclusion, writing unit tests is very important for maintaining and improving legacy projects. When writing unit tests, you need to choose an appropriate testing framework, identify test points, refactor code, write easy-to-maintain test cases, and more. By writing comprehensive, solid unit tests, you can improve code quality, reduce bugs, and reduce maintenance costs.


Origin blog.csdn.net/weixin_50829653/article/details/132714440