Layered automated testing under continuous delivery

Continuous delivery requires a team to build runnable products quickly, frequently, and with high quality, in order to respond to rapid business change and capture ever-shifting market demand. Whoever can respond to customer needs faster and deliver products that satisfy customers gains one of a company's most important competitive advantages. Agile and DevOps are among the key practices that make continuous delivery feasible. The DevOps "three ways" provide guidelines for implementing it:

  • Rule 1: Build the "pipeline" from left to right, improve the efficiency with which work flows through it, and discover problems in the deployment process in a timely manner. For example: one-click automated compilation, testing, packaging, deployment, and release (release is usually triggered manually) through the pipeline.

  • Rule 2: Build a "fast feedback" mechanism from right to left. Set quality gates on the pipeline so that each stage must meet a minimum quality threshold before the next stage runs. For example: static code analysis, unit test pass rate, code coverage, security scans, integration test pass rate, and so on.

  • Rule 3: Practice "continuous learning" to improve the team's overall effectiveness and reduce waste in the process. For example: make project delivery smoother by adopting agile practices such as Scrum, Kanban, and SAFe.

In most cases, companies can establish "Rule 1" and build a pipeline that compiles, packages, and deploys software. However, a left-to-right pipeline alone cannot deliver high-quality software: it speeds up releases, but the quality of what reaches users is uneven, which hurts the user experience. The book "Continuous Delivery" advocates building fast feedback through automated testing, so that every stage of every pipeline can meet its quality gate. Implementing automated testing is expensive, so the input-output ratio of quality investment must be balanced in project management. Mike Cohn's book "Succeeding with Agile" proposes the automated testing pyramid as a model for weighing that investment, dividing automated testing into three layers: UI, API, and Unit. Most companies' projects likewise adopt layered automated testing to control the input-output trade-off.

However, most automated tests only borrow the traditional UI, API, and Unit labels; the test cases actually written are all end-to-end or server-side tests rather than genuinely layered automation, so the layering exists in name only. Carrying out automated testing the traditional way produces a great deal of repetitive work, lengthens the pipeline's feedback loop, and makes problems harder to troubleshoot. Traditional "layered automated testing" is then no longer an asset that improves efficiency; instead, bloated and hard-to-maintain automated test cases become a burden for the team.

Background

With the development of DevOps, agile, and microservices, products iterate faster and faster, and "speed" and "quality" are often seen as being at odds. As business becomes progressively service-oriented, microservices have proliferated, and when implementing traditional layered automated testing, enterprises have drifted from the "pyramid model" to the "rugby-ball model". UI and API automated tests both depend on the system under test running normally, and their execution often fails because of test data, environment (front-end and back-end) deployment, dependencies between cases, and similar causes. Even when the environment is healthy, the number of test cases grows geometrically as coverage extends from the initial core scenarios to every branch, making test failures extremely hard to localize. Against this background, deeper thinking about "layered automated testing" has emerged.


Automated Testing and Defect Productivity

The pyramid model says that an automated testing strategy should put the most tests into the bottom "unit test" layer, then "integration tests", and finally "end-to-end tests". In today's microservice landscape, however, enterprises put most of their automated testing into the "integration testing (API testing)" stage, which is one reason interface testing has grown explosively in recent years.


From the perspective of the inverted-triangle "defect generation rate" model, most defects arise at the unit level, where they are also cheapest and fastest to fix. Eliminating defects at the unit level significantly reduces the defects that surface in integration and end-to-end tests and makes troubleshooting more efficient. Unit testing is especially important as a quality safeguard when a code module must be refactored or redesigned. Fixing defects at the unit level gives the fastest feedback and the highest execution efficiency, and combining techniques such as "test doubles" to isolate dependencies reduces coupling between units, raises the success rate of test runs, and makes continuous testing possible.
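As a minimal, self-contained sketch of the "test double" idea (the UserMapper, GreetingService, and StubUserMapper names below are invented for illustration, not code from any real project), a stub can stand in for a data-access dependency so the unit runs without a database:

```java
// A hand-rolled test double: the stub replaces the real data-access
// dependency, so the unit under test needs no database or environment.
interface UserMapper {
    String findNameById(long id);
}

class GreetingService {
    private final UserMapper mapper;

    GreetingService(UserMapper mapper) {
        this.mapper = mapper; // the dependency is injected, so it can be doubled
    }

    String greet(long id) {
        return "Hello, " + mapper.findNameById(id);
    }
}

// The stub returns canned data instead of querying anything real
class StubUserMapper implements UserMapper {
    public String findNameById(long id) {
        return "Miller";
    }
}
```

A unit test can then construct GreetingService with a StubUserMapper and assert on greet(1L) with no environment dependency at all; mocking frameworks such as Mockito automate exactly this pattern.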

What is layered automated testing

With the basic concepts of continuous delivery and automated testing in place, let's sort out how layered automation is actually implemented. Traditional "integration testing" mostly designs automated test cases by calling interfaces, rather than testing the interfaces themselves. Covering the entire back-end business logic by calling interfaces is just like UI automation: both depend on the whole back-end server starting normally and the environment initializing successfully. When implementing UI and API automation, teams often ask: if UI automation already guarantees the correctness of the core business flow, why develop "duplicate" cases through API testing? From a code-coverage perspective, covering the same code logic with different test types really makes no sense. This leads to thinking about test strategy: how to allocate effort across manual, UI, API, and Unit testing. A well-divided test strategy reduces waste in the testing process, which is also one expression of the "lean" thinking in the third of the DevOps three ways.

Long-chain tests bring design difficulties such as dependencies between cases, test-data dependencies, and environment dependencies. Although engineering methods can alleviate these problems, test execution efficiency and the ability to pinpoint exactly what is wrong in the code under test remain unsolved, so when a pipeline run fails, distinguishing a "test-case problem" from a "code-under-test problem" is extremely inefficient.

Here we take a defect management platform as an example to explain the difference between "traditional layered automation" and "layered automation". The front-end stack is Vue.js and the back-end stack is Spring Boot. The platform's technical architecture is shown in the figure below:

To perform layered automated testing against this architecture, the tests can be roughly divided into front-end tests, back-end tests, and integration tests. Note that in real projects the back end is often a microservice architecture.

Front-end testing

Traditional layered automated testing mostly uses "external drivers" such as Selenium, Playwright, and Nightwatch to automate the UI layer, for example by launching a browser to simulate real user operations. Layered automation instead drives the development code from test code through a "built-in driver". Once the front-end project is decomposed by layering techniques, the front end can be tested independently and in isolation, improving the success rate of continuous testing.

Taking a Vue project as an example, layered automation splits front-end testing into three layers: function-level (JavaScript), component-level, and end-to-end. Each layer has its own testing focus and technology stack. Function-level tests usually use the Jest unit-testing framework to test a function itself, calling it with test data to verify that its logic meets expectations, while jest.mock() isolates the function's internal call chain. Component-level tests use Vue's official Vue Test Utils: shallow rendering (shallowMount) renders only the component's first-level DOM structure, leaving nested child components unrendered, which makes rendering more efficient and unit tests faster. UI (E2E, end-to-end) automation can use a Mock Server as the barrier between front end and back end, completing front-end UI automation without starting the back-end server, which greatly improves its stability; of course, UI automation also depends on how well-structured the code under test is. Layered automation gives front-end testing clearer responsibilities per layer and fewer external dependencies. For the proportion of effort to invest in function, component, and end-to-end tests, refer to the test pyramid model. The front-end layered automation split is shown in the following figure:

(Figure: front-end layered automated testing after the split)
Function testing: verify, for a given set of data, whether the input and output meet expectations and whether executing the function affects other resources. For example, use Jest, a JavaScript unit-testing framework, to test functions. Jest plays roughly the same role as JUnit 5 but is more fully featured, with built-in function mocking, code coverage, snapshot testing, and more. Jest supports mainstream front-end stacks such as TypeScript, Node, React, Angular, and Vue. The sample unit-test code using Jest is as follows:

/**
 * Use jest.fn() to define the implementation body of a mock function
 */
test('Test Mock function implements', () => {
  const mockFn = jest.fn().mockImplementation(
    () => {
      console.log('mockImplementation function be invoked!')
      return 'Miller_' + 30 + '_Male'
    }
  )
  // The mock ignores its arguments and returns the canned string
  expect(mockFn(1, 2, 3)).toMatch(/Miller/)
})

Component testing: components in a Vue project are individual .vue files, each containing three major pieces: HTML (<template>), CSS (<style>), and functions (<script>). Component tests verify that a component's functions, data, and events are correct by simulating behavior, rather than calling a function directly the way a unit test does; compared with unit tests, they need to load more code. Component testing usually also needs to isolate dependencies between components and mock the network requests inside a component. In a Vue project, a component can be rendered through Vue.extend and then mounted via the resulting constructor to obtain the component instance. The sample code is as follows:

import Vue from 'vue'
import HelloWorld from '@/views/HelloWorld'
describe('HelloWorld.vue', () => {
  // The default test case generated when the Vue project is scaffolded
  it('should render correct contents', () => {
    // Create a constructor for the HelloWorld component via Vue.extend
    const Constructor = Vue.extend(HelloWorld)
    // Mount it to obtain the instance vm, which holds all of the HelloWorld component's state
    const vm = new Constructor().$mount()
    expect(vm.$el.querySelector('.hello h1').textContent)
      .toEqual('Welcome to Your Vue.js App')
  })
})

Components can also be tested with the official Vue Test Utils toolkit, whose shallow-rendering feature (shallowMount) renders only the component's first-level DOM structure and leaves nested child components unrendered, making rendering more efficient and unit tests faster. The sample code for testing components with Vue Test Utils is as follows:

// Import the Vue Test Utils library
import { shallowMount } from '@vue/test-utils'
import HelloWorld from '@/views/HelloWorld'
describe('TestSuite_HelloWorld', () => {
  test('TestCase_CheckChangeData', () => {
    // Given... the initial conditions and state of the test case
    const wrapper = shallowMount(HelloWorld)
    // When... perform the action
    const message = wrapper.vm.$data.msg
    // Then... assert the result of the action
    expect(message).toMatch('Welcome')
    // Modify the contents of the data property
    wrapper.setData({ msg: 'Miller' })
    expect(wrapper.vm.$data.msg).toMatch('Miller')
  })
})

End-to-end testing: simulate clicks, typing, and other behavior on the system under test the way a real person would, to verify that the system produces the expected results. This kind of test usually depends on the stability of the system under test and needs a complete, real environment. A newly created Vue project's default end-to-end tool is Nightwatch. For a front-end engineer, Nightwatch is a good choice: it keeps the test code inside the project, which is convenient for collaboration and version management, and it uses Selenium as the underlying test driver. The sample code is as follows:

module.exports = {
  'e2e_Login_Page_CheckElement': function (browser) {
    // Use the default address and port from nightwatch.conf.js
    const devServer = browser.globals.devServerURL
    browser
      .url(devServer)
      .waitForElementVisible('#app', 5000)
      .assert.containsText('h3', '持续测试-分层自动化')
      .assert.elementCount('h3', 1)
      .end()
  }
}

For a test engineer, Playwright is recommended for end-to-end automation. It offers convenient features for building automated test cases quickly and stably, such as auto-waiting, recording, run-time debugging, and trace playback. The sample code is as follows:

@DisplayName(value = "Playwright end-to-end test suite")
public class PlaywrightEndToEndTests {

    @DisplayName("Test the add-issue flow")
    @Test
    public void testAddIssueFlow() {
        // Given.
        try (Playwright playwright = Playwright.create()) {
            BrowserType.LaunchOptions launchOptions = new BrowserType.LaunchOptions().setHeadless(false).setSlowMo(200);
            // When.
            Browser browser = playwright.chromium().launch(launchOptions);
            // Enable tracing so the whole run can be replayed after the script finishes
            BrowserContext context = browser.newContext();
            context.tracing().start(new Tracing.StartOptions().setScreenshots(true).setSnapshots(true).setSources(true));
            Page page = context.newPage();
            // 'url' points at the address of the system under test
            page.navigate(url);
            // Locate elements and log in
            page.locator("#username").fill("[email protected]");
            page.locator("button.el-button.submit.el-button--primary").click();
            // Jump to the issue list
            page.navigate(url + "/issues/list");
            page.locator("#addIssue").click();
            page.locator("#issueTitle").fill("playwright test by Miller");
            page.locator("#issueHandler").click();
            page.locator("li[id=\"[email protected]\"]").click();
            page.locator("#submit").click();
            // Then
            assertThat(page.content(), Matchers.containsStringIgnoringCase("playwright test by Miller"));
            // Pause to enter debugging/recording mode
            page.pause();
            context.tracing().stop(new Tracing.StopOptions().setPath(Paths.get("trace.zip")));
        }
    }
}

With the technical solutions above, layered automated front-end testing can be realized. One caveat: end-to-end automation depends on the back-end service running normally, so Mock Server technology is needed to isolate the front end from the back end.

For assertions to be made correctly during end-to-end automation, the back-end server must normally be running. Front-end development and testing often lag because of back-end progress, and sharing data across the team also hurts efficiency. In an ideal front-end/back-end-separated workflow, the back end defines the table structure early in the project, generates the Java Bean objects, completes the interface definitions, and provides at least one mock data response; the front end can then do joint debugging against the automatically generated Swagger documentation, and as back-end interfaces are gradually completed, the front end keeps pace so that business logic is implemented in sync. When the front end needs a new interface that the back end has not yet defined or implemented, the front end can build the static page from the prototype design and use mock technology to call a non-real server interface. A Mock Server makes it easy to simulate the data the back end would return and helps verify front-end logic, enabling independent front-end development and testing with no dependency on the back-end server. Its internal processing flow is illustrated in the figure below:
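The idea can be sketched with nothing but the JDK. The snippet below is an illustration only: real projects would typically use a dedicated tool such as Mock Server, and the /issues/list path and JSON payload here are assumptions, not the platform's real API. It serves canned JSON so the front end can run without a back end:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal mock-server sketch using the JDK's built-in HTTP server:
// it returns a canned JSON response for an endpoint the front end calls.
public class MiniMockServer {
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/issues/list", exchange -> {
            byte[] body = "[{\"id\":1,\"title\":\"mocked issue\"}]"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // passing port 0 binds an ephemeral free port
        return server;
    }
}
```

The front end (or a UI test) is then pointed at this server's address instead of the real back end; swapping the canned body changes the scenario under test.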

(Figure: Mock Server internal processing flow)
A Mock Server can serve not only as an isolation service between front end and back end, but also to isolate back-end microservices from one another. It supports both JSON files and code as response data, so calls can be isolated either way. It can also act as a proxy server in front of a load balancer (such as Nginx); when used as a proxy service, its state flow is as follows:

Backend testing

The back end can be split into Mapper, Service, Controller, and other layer tests according to the architecture of the system under test. The layering strategy is roughly as follows:


Mapper testing: tests of the Mapper layer focus on the correctness of the SQL statements; if dynamic SQL is used, the logic of each branch must also be verified. A Mapper test can configure the data source in a mybatis-config-test.xml file and then use the native SqlSessionFactory to open a database session and perform CRUD operations. The advantage of this approach is that the Mapper layer can be tested without depending on Spring Boot. The sample code is as follows:

@DisplayName("Test the Mapper interface and XML with plain Java code")
public class CalculatorHasDBMapperTestByJavaCodeTests {

    private static SqlSession sqlSession;
    private CalculatorHasDBMapper mapper;

    @BeforeAll
    public static void beforeAll() throws IOException {
        // Read the MyBatis configuration from mybatis-config-test.xml
        Reader reader = Resources.getResourceAsReader("mybatis-config-test.xml");
        SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(reader);
        sqlSession = sqlSessionFactory.openSession();
    }

    @AfterAll
    public static void afterAll() {
        sqlSession.close();
    }

    @BeforeEach
    public void beforeEach() {
        mapper = sqlSession.getMapper(CalculatorHasDBMapper.class);
    }

    @AfterEach
    public void afterEach() {
        // Commit the changes; without a commit they never reach the database.
        // If you only want to test the Mapper interface, the commit can be skipped.
        sqlSession.commit();
    }

    @DisplayName("Test the INSERT statement")
    @Test
    public void testInsert() {
        Calculator calculator = new Calculator();
        calculator.setFirstNumber(1.0);
        calculator.setSecondNumber(2.0);
        calculator.setResult(3.0);
        Integer insert = mapper.insert(calculator);
        // Assert that exactly one record was affected
        assertThat(insert, Matchers.is(1));
    }
}

The second way is to test MyBatis through the officially provided @MybatisTest annotation. @MybatisTest auto-configures the SqlSessionFactory and, by default, an in-memory database. Test cases written with it automatically roll back their transactions at the end of the test, and @MybatisTest does not load unrelated bean components into the test.

@MybatisTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@DisplayName("Test the Mapper with @MybatisTest")
public class CalculatorHasDBMapperTestByMyBatisTestAnnotationTests {

    @Autowired
    private CalculatorHasDBMapper calculatorHasDBMapper;

    @DisplayName("Test the INSERT statement")
    @Test
    public void testInsert() {
        Calculator calculator = new Calculator();
        calculator.setFirstNumber(1.0);
        calculator.setSecondNumber(2.0);
        calculator.setResult(3.0);
        Integer insert = calculatorHasDBMapper.insert(calculator);
        assertThat(insert, Matchers.is(1));
    }
}

The third way is to test the Mapper through Spring's transaction support. With @Transactional, the data never actually lands in the database: it behaves like the plain-Java SqlSession approach before a commit. The auto-increment primary key ID is still consumed, but the data is rolled back, which is the default behavior of @Transactional in tests.

@Transactional
@SpringBootTest
@DisplayName("Isolate data with Spring's @Transactional")
public class CalculatorHasDBMapperTestByTransactionTests {

    @Autowired
    private CalculatorHasDBMapper calculatorHasDBMapper;

    @DisplayName("Test the INSERT statement")
    @Test
    public void testInsert() {
        Calculator calculator = new Calculator();
        calculator.setFirstNumber(1.0);
        calculator.setSecondNumber(2.0);
        calculator.setResult(3.0);
        Integer insert = calculatorHasDBMapper.insert(calculator);
        assertThat(insert, Matchers.is(1));
    }
}

The fourth way is to use the H2 in-memory database. By switching the data source to an application-h2.properties configuration, all test data points at the in-memory database, usually combined with Flyway. This approach imposes certain requirements on SQL syntax compatibility. The sample code is as follows:

@ActiveProfiles("h2")
@SpringBootTest
@DisplayName("Use the H2 in-memory database")
public class CalculatorHasDBMapperTestByH2Tests {

    @Autowired
    private CalculatorHasDBMapper calculatorHasDBMapper;

    @DisplayName("Test the SELECT statement")
    @Test
    public void testSelect() {
        List<Calculator> select = calculatorHasDBMapper.getCalculatorList();
        assertThat(select.size(), Matchers.greaterThanOrEqualTo(0));
    }
}

Controller testing: the Controller layer test should focus on validating the input data and the structure of the returned data, because the Service the Controller calls has already been verified in the Service-layer tests. A Controller is also just an ordinary Java class, so we can use mocks and test it as a plain class, though this loses the HTTP message details of the server. The sample code for testing the Controller as a plain Java class is as follows:

@DisplayName("Test the calculator Controller layer")
@ExtendWith(MockitoExtension.class)
public class CalculatorControllerTestByMockitoTests {

    @InjectMocks
    private CalculatorController calculatorController;
    @Mock
    private CalculatorServiceImpl calculatorService;
    // Test data
    private Calculator calculator;

    /**
     * Call the Controller object's methods directly, treating it as a plain class rather than an HTTP interface.
     */
    @DisplayName(value = "Test the RESTful POST method with a JSON parameter")
    @Test
    public void testPostMethod() {
        when(calculatorService.add(anyDouble(), anyDouble())).thenReturn(calculator);
        assertAll(
                // Build the POST request parameter. First case: pass null
                () -> {
                    String nullObject = calculatorController.testPostMethod(null);
                    assertThat(nullObject, equalToIgnoringCase("Request body can't be empty."));
                }
        );
    }
}

The second way is to use Spring Boot's @WebMvcTest annotation to load only the Controller under test, simulate a client sending requests to the server through MockMvc, and stub the behavior of specific Service methods with @MockBean, thereby testing the Controller in isolation.

package com.github.millergo.controller;

@WebMvcTest(value = {CalculatorController.class})
@DisplayName("Test CalculatorController by @WebMvcTest")
public class CalculatorControllerTestByWebMvcTestTests {

    @MockBean
    private CalculatorServiceImpl calculatorService;
    @Autowired
    private MockMvc mockMvc;
    private Calculator calculator;

    @DisplayName("Test RESTful GET Method")
    @Test
    public void testGetMethod() throws Exception {
        // Stubbing: isolate the Service
        when(calculatorService.add(anyDouble(), anyDouble())).thenReturn(calculator);
        // MockMvc acts as a client and can send requests to the server
        ResultActions resultActions = mockMvc.perform(MockMvcRequestBuilders.get("/calc/add/1/2"));
        resultActions.andExpect(MockMvcResultMatchers.status().isOk())
                .andDo(MockMvcResultHandlers.print());
    }
}

For Controller-layer testing, the @WebMvcTest annotation is recommended; of course, other approaches such as @SpringBootTest and REST Assured can also test the Controller.

Service testing: the Service layer is the most important test layer of the entire back end, since it carries the correctness of the back-end business; in real projects, most back-end tests verify the Service. At present, most teams write Spring Boot unit tests with the @SpringBootTest annotation, which starts the Spring IoC container, loads all dependent beans, and injects them automatically before the test case calls the Service layer. The drawback is that every dependent service must be running when the case executes. Under microservices there are many RPC calls: what looks like a single method call may fan out to thousands of services, and the databases, Kafka, Redis, and so on that the Service depends on must all be healthy, otherwise the test case is likely to fail. Layered automation testing is different: it isolates the Service through test doubles. With mocking, a single method or even a single line of code can be tested independently of its internal call chain. Since the Mapper layer has already been tested separately, we can construct test data ourselves when testing the Service and mock away the Mapper dependency, achieving an independent Service test. Service-layer tests can use Mockito, PowerMock, spring-test, and similar tools. For example, the following sample uses spring-test combined with Mockito to test a simple calculator.

@ExtendWith(MockitoExtension.class)
public class UseMockitoInJunit5ByAnnotation {

    private CalculatorServiceImpl calculatorService;
    @Mock
    private CalculatorMapper mockCalculatorMapper;

    @Test
    public void testGetCalcResultUseMockito() {
        this.calculatorService = new CalculatorServiceImpl();
        // Inject the mocked Mapper into the Service via spring-test's reflection utility
        ReflectionTestUtils.setField(calculatorService, "calculatorMapper", mockCalculatorMapper);
        Mockito.when(mockCalculatorMapper.getCalcResultByDesc("desc")).thenReturn(4.0);
        calculatorService.getCalcResult("desc");
        // Verify that the mocked CalculatorMapper was called exactly once
        Mockito.verify(mockCalculatorMapper, Mockito.times(1)).getCalcResultByDesc("desc");
    }
}

CalculatorServiceImpl needs a CalculatorMapper injected, so here we use spring-test's reflection utility to inject the mocked CalculatorMapper into the CalculatorServiceImpl instance. Alternatively, Mockito's @InjectMocks annotation can inject the mock objects into CalculatorServiceImpl automatically, without the spring-test reflection helper. The sample code is as follows:

@ExtendWith(MockitoExtension.class)
public class MockitoInjectMocksTests {

    // Inject the @Mock-annotated objects into the fields of the @InjectMocks object
    @InjectMocks
    private CalculatorServiceImpl calculatorService;
    @Mock
    private CalculatorMapper mockCalculatorMapper;

    @Test
    public void testInjectMocks() {
        // The mocked Mapper was injected automatically, so the call runs without a real Mapper
        calculatorService.getCalcResult(null);
    }
}

The @InjectMocks annotation first tries to inject the required mock objects through a constructor; if no suitable constructor exists, it falls back to setter (setXxx()) injection.

Integration Testing

The "integration" in integration testing is a relative concept. We generally divide testing into several levels, such as unit testing, integration testing, system testing, and acceptance testing. Integration testing refers to testing units after they have been put together, for example: integration between methods, between classes, between modules, between microservices, between services and middleware, between front end and back end, and between systems. In the front-end testing above, component testing and end-to-end testing can be understood as kinds of integration testing, and using REST Assured against the Controller in the back-end tests can also be understood as integration testing. Integration testing pays more attention to whether modules are correct once combined. If you drive tests directly with Postman, HttpClient, or REST Assured, that can be considered back-end integration testing or server-side testing. But if you only want to test the integration between Controller and Service, you need to isolate the Mapper, which is usually done during debugging rather than in automated integration tests, or use the H2 database for back-end integration testing.


Layered Test Architecture

By applying these techniques, "true" layered automated testing can be carried out effectively, but traditional end-to-end and server-side testing still have value, and the choice depends on the project and the team. Traditional layered automation and layered automation can even be made switchable, choosing whether to use test doubles via configuration, but that has to be implemented at the test-framework or test-platform level. For a complete diagram of the layered automated test architecture, refer to the following:

(Figure: complete layered automated test architecture)

Test Cases and DevOps Platforms

Continuous delivery requires us to deliver products to customers faster, and building high-quality software with DevOps alone does not complete that delivery. To deliver products to customers, we need to keep shifting left toward the requirements side, associating pipelines with requirements so that the platform can sense the correspondence between them and the pipeline to release can be selected from the requirements side. Likewise, test cases should be associated with pipelines, requirements, manual test cases, and so on; which association to use depends on the enterprise's actual delivery scenario and the design of its DevOps platform.

Traditional API and UI automation test cases are usually standalone projects with no linkage to the company's internal platforms. The drawback is that the automated cases have no correlation with the business: when a requirement changes, it is hard to locate which test cases the requirement involves and which cases are affected by the change. In agile testing with Scrum, requirements are generally written as user stories, and under each user story there are associated development tasks, test tasks, test cases, defects, and so on. Automated test cases can be associated with requirements or use cases through annotations; different platforms implement this differently. Some platforms analyze which test cases are affected by a requirement and its sub-requirements by associating requirements with use cases. In this mode, an automated test case only needs to be associated with its test case, and the case's status (manual or automated) can be updated automatically after the automated test runs. In an ideal agile process, testing runs in parallel with development, so that when a developer moves a development task under a requirement to "to be tested", the associated automated cases are triggered and the test task's status is updated automatically.

There is also a scenario in which test cases and requirements are disconnected, for example when testers create test cases as mind maps (only online mind maps are discussed here). The automated test cases then need to establish a bidirectional relationship: by binding the mind-map node ID and the requirement ID, the mind-map use-case nodes and the requirement status can be updated automatically after the automated case runs (generally the requirement status field is not updated directly; instead a record is written under the requirement, or an automated test case is created automatically whose use-case name is the automated test method's @DisplayName). Pseudocode for associating a test case with requirements or use cases follows:

@DisplayName("Test cases associated with the DevOps platform")
public class IssuesTests extends BasicTestCase {

    /**
     * {@link ApiDoc @ApiDoc} automatically updates the status of the listed
     * task IDs when the test passes.
     */
    @ApiDoc(value = {"project/7/task/10010", "project/7/task/10011"})
    @DisplayName("Update the status of tasks 10010 and 10011 linked to the requirement")
    @Test
    public void testAddIssue() {
        // Correctness checks for the concrete business logic go here.
        doSomething();
        // A JUnit TestWatcher listens for the test result: if the assertion
        // passes, the task status is automatically set to "Done"; if it
        // fails, the status is set to "Failed".
        assertThat(true, is(true));
    }
}

The @ApiDoc in the pseudocode above is a custom annotation used to automatically update the specified task or user-story IDs in the requirements-management platform.
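To make the @ApiDoc mechanism concrete, here is a minimal sketch of what a JUnit 5 TestWatcher would do in its testSuccessful() callback: read the annotation off the test method via reflection and push one status update per task ID. JUnit itself is left out so the sketch stays self-contained; ApiDocReporter and taskIdsOf are illustrative names, not part of any real framework.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical custom annotation mirroring @ApiDoc in the pseudocode above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ApiDoc {
    String[] value();
}

public class ApiDocReporter {
    @ApiDoc({"project/7/task/10010", "project/7/task/10011"})
    static void testAddIssue() { /* business assertions would live here */ }

    // What a JUnit TestWatcher would do in testSuccessful(): read the
    // annotation off the test method and collect the task IDs to update.
    static String[] taskIdsOf(String methodName) throws Exception {
        Method m = ApiDocReporter.class.getDeclaredMethod(methodName);
        ApiDoc doc = m.getAnnotation(ApiDoc.class);
        return doc == null ? new String[0] : doc.value();
    }

    public static void main(String[] args) throws Exception {
        for (String task : taskIdsOf("testAddIssue")) {
            // A real watcher would call the DevOps platform's REST API here.
            System.out.println("mark DONE: " + task);
        }
    }
}
```

In a real JUnit 5 setup the same reflection would run inside an extension implementing org.junit.jupiter.api.extension.TestWatcher, using the test method obtained from the ExtensionContext.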

Test Coverage

Test coverage usually has two basic dimensions: requirement coverage and code coverage. Requirement coverage asks whether, after requirements are decomposed, each requirement has corresponding associated items (development tasks, test tasks, use cases, defects, etc.), and whether in the end every requirement has been verified, proving that the software has been tested. The relationship between requirements and use cases is shown in the following figure:

(Figure: relationship between requirements and use cases)
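Requirement coverage can be expressed as a simple check over these associations: after decomposition, every requirement should have at least one linked item. A small sketch of that check (the requirement and test-case IDs and the uncovered helper are hypothetical):

```java
import java.util.*;

public class RequirementCoverage {
    // requirement ID -> IDs of linked test cases (hypothetical data shape)
    static List<String> uncovered(Map<String, List<String>> links) {
        List<String> missing = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : links.entrySet())
            if (e.getValue().isEmpty()) missing.add(e.getKey());
        Collections.sort(missing); // deterministic report order
        return missing;
    }

    public static void main(String[] args) {
        Map<String, List<String>> links = Map.of(
            "REQ-1", List.of("TC-11", "TC-12"),
            "REQ-2", List.of());
        System.out.println(uncovered(links)); // [REQ-2]
    }
}
```

A DevOps platform would run this kind of query over its own association tables and surface the uncovered requirements on a dashboard.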

Code coverage indirectly measures software quality by calculating the proportion of source code executed during testing. Note that code coverage is normally used to find gaps in test design and to supplement test cases, not as the sole standard for measuring code quality. In Java, the JaCoCo tool can collect code coverage, and combined with tools such as git diff and JGit it can produce incremental coverage. By combining the DevOps platform, test cases, code coverage, and related techniques, the relationships and impact scope among requirements, manual cases, and automated cases can be built effectively.
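Incremental coverage, as produced by combining JaCoCo line data with a git diff, reduces to intersecting two sets of line numbers per file: the lines changed in the diff and the lines JaCoCo reports as executed. A minimal sketch of that calculation (the data shapes are assumptions; parsing JaCoCo's exec/XML output and the diff is omitted):

```java
import java.util.*;

public class IncrementalCoverage {
    // changed: file -> line numbers touched in the diff
    // covered: file -> line numbers JaCoCo reports as executed
    static double incremental(Map<String, Set<Integer>> changed,
                              Map<String, Set<Integer>> covered) {
        int total = 0, hit = 0;
        for (Map.Entry<String, Set<Integer>> e : changed.entrySet()) {
            Set<Integer> cov = covered.getOrDefault(e.getKey(), Set.of());
            for (int line : e.getValue()) {
                total++;
                if (cov.contains(line)) hit++;
            }
        }
        return total == 0 ? 1.0 : (double) hit / total; // no changes => fully covered
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> changed = Map.of("OrderService.java", Set.of(10, 11, 12, 13));
        Map<String, Set<Integer>> covered = Map.of("OrderService.java", Set.of(10, 11, 30));
        System.out.println(incremental(changed, covered)); // 0.5: 2 of 4 changed lines covered
    }
}
```

Gating a pipeline on this number ("new code must be at least N% covered") gives a much sharper quality signal than total coverage, which legacy code can drag down.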

Summary

​ Through a DevOps-style platform, requirements, tasks, defects, use cases, code (development and testing), and pipelines can be integrated organically, achieving integration from the moment a business requirement arises to release and launch, and truly delivering high-quality software continuously. These data are no longer information islands; interconnecting them makes the whole project-management process more digitalized. For how to productize this, refer to products such as Yunxiao, TAPD, and ZenTao. It is also possible to start with continuous delivery on the technical side, using the open-source Jenkins to achieve agility there first, and then shift left to the business side. The figure below shows an example Jenkins continuous-integration pipeline.

(Figure: example Jenkins continuous-integration pipeline)
Due to space limitations, the entire layered automated testing system is not explained in detail here; a series of follow-up articles is planned.


Origin: blog.csdn.net/m0_53918927/article/details/131563836