This time, a thorough introduction to front-end testing, covering unit testing and component testing (about 24,000 words)

Front-end testing has always been an important topic in front-end engineering, but many people misunderstand testing, believing that it is useless or a waste of time, or that testing should be left to testers while they focus on development. Therefore, the article begins by introducing the definition, purpose, and classification of software testing from the perspective of "software testing in the context of software engineering", so that readers can understand testing correctly, clarify their own position in the software testing stage, and know which responsibilities and tasks they should take on during the testing process.

After clarifying the definition and purpose of software testing, we get started with front-end testing. In this part, I introduce the common fundamentals of automated testing, such as assertions and mocking, cover the basic usage of the unit testing framework Jest as well as the newer Vitest, and compare the two in some depth.

Finally comes the hands-on part of front-end testing, where I demonstrate how to test a small address-book application. I first introduce the component mounting libraries used in component testing, Vue Test Utils and Vue Testing Library, and then focus on the principles, techniques, and caveats of component testing.

This article is long and packed with material; it is recommended to bookmark it and read it slowly.

Software Testing in the Context of Software Engineering

What is software testing

What is software testing? To answer this question, we first need to clarify why software testing is needed at all. The answer is simple: to ensure the quality of the software. No matter what technologies and methods are used during development, a software product will contain errors and problems to some degree. Adopting advanced development methods and a well-designed development process can reduce the number of errors introduced, but it cannot eliminate them completely, and those errors need to be found through testing. Software testing is therefore a key step in software quality assurance.

There have long been two competing views on the definition of software testing. The positive view is that software testing is the process of running or measuring a system by manual or automated means, with the purpose of checking whether it meets the specified requirements, or of clarifying the difference between expected and actual results. This view clearly states that software testing exists to verify that requirements are met.

The opposing view is that testing is the process of executing a program or system in order to find errors. Testing exists to find defects, not to prove that the program is error-free: if no problems are found, it does not mean that none exist, only that the potential problems in the software have not been found yet. According to this view, a successful test is one that finds a problem; otherwise the test has no value.

These two views simply look at the problem from different angles: one focuses on using testing to guarantee quality and verify that the software meets its requirements, the other on using testing to find where the software's defects lie. In a concrete scenario, software testing should strike a balance between the two, or lean toward whichever matters more.

The relationship between software testing and software development

Having covered why software testing is needed and what it is, we have clarified its definition and purpose. In this subsection, we discuss the relationship between software testing and software development, and look at the role software testing plays in software engineering.

In the popular stereotype, software testing seems to happen only after coding is finished: it is seen as a means of checking the product and is carried out as the last activity of the software life cycle. In the well-known waterfall model, testing sits downstream of the programming phase and upstream of the maintenance phase; programming comes first and testing afterwards, with the position of testing clearly fixed. Tests in the waterfall model are only executed after the program is complete, emphasizing that testing is merely a verification of the program:

However, the waterfall model belongs to traditional software engineering. It has significant limitations, conflicts with the iterative thinking and agile methods of modern software development, and no longer fits the actual needs of today's software engineering. In fact, software testing runs through the entire software life cycle: it is involved in development activities from requirements review and design review onward. For example, by reading, discussing, and reviewing the requirements definition, we can not only discover problems in the requirements themselves, but also understand the design characteristics of the product and the real needs of users, and thus determine test objectives, prepare test cases, and plan test activities.

Similarly, in the software design phase, by understanding how the system is implemented and what operating environment it is built for, we can assess the testability of the system and check whether its design meets the reliability requirements.

Therefore, software testing and software development collaborate with each other and proceed together throughout the software life cycle: when the software project starts, the work of software testing begins as well. The V-model nicely reflects this relationship between testing and development:

As shown in the figure, the left side is the process of defining and implementing the software, and the right side is the process of testing what was built on the left. In other words, there is a one-to-one correspondence between testing and development: each development work product is checked to confirm whether it meets the specified requirements.

You may be a little confused by the various test types on the right side of the V-model; tests such as functional tests and acceptance tests all belong to the category of software testing.

Classification of software testing

Software testing can be classified from different angles, for example by testing method, by testing objective, or by testing stage. The figure below shows this three-dimensional space of software testing:

For front-end programmers, unit tests, integration tests, and system tests should sound familiar. These three levels are divided according to the object under test, or the testing stage; they are covered in detail in the next few sections.

Functional testing, also known as correctness testing, verifies whether each function works as specified in the pre-defined requirements. Most of the unit tests that we front-end programmers write are functional tests. Tests with other goals, such as stress testing, compatibility testing, and security testing, are generally left to professional testers.

Regression testing ensures that new changes to the software (such as added or modified code) do not break existing functionality. For example, running the test scripts in the CI/CD pipeline after new code is pushed to the version control repository is a form of regression testing.

In addition, there are four classifications of testing that deserve our special attention:

Static testing and dynamic testing

According to whether the program is running or not, testing can be divided into static testing and dynamic testing.

Static testing includes reviews of software requirements and design specifications, code review, and static analysis. For example, after writing some code we usually give it a quick read-through, analyzing whether its behavior meets expectations by following the program's control flow. This kind of static analysis is a static test.

In addition, using TypeScript lets us statically analyze the code and perform type checking while coding, revealing hidden type errors in the program; this process is also a static test. Without a statically typed language like TypeScript, we usually have to rely on keywords such as typeof to guard against such type errors, and create and run corresponding unit test cases to make sure the code's type handling behaves as expected. So from the perspective of testing, languages and tools like TypeScript also free programmers' hands to some extent: we no longer need to write tedious test cases to check type handling and can focus on functional testing of the program.
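As a small illustration (the sum function here is just a hypothetical example), compare a runtime typeof guard, which needs its own test case, with a TypeScript signature that catches the same mistake during static analysis:

// Without static types, the function has to guard against wrong argument
// types at runtime, and a unit test is needed to cover this branch.
function sumLoose(a: unknown, b: unknown) {
  if (typeof a !== "number" || typeof b !== "number") {
    throw new TypeError("sum expects two numbers")
  }
  return a + b
}

// With TypeScript annotations, a call like sum("1", 2) is rejected by the
// compiler during static analysis, before any test code runs.
function sum(a: number, b: number): number {
  return a + b
}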

Of course, using linters or formatters such as ESLint and Prettier to check whether the format and style of the code conform to the team's conventions is also a form of static testing.

Using tools to statically analyze code and check whether it meets requirements belongs to automated testing, and the tools used are called testing tools. Later we will focus on how to use testing tools such as Jest and Vitest to perform automated testing and write test code.

Dynamic testing finds errors by actually running the program: by observing the running code we gather information about system behavior, memory, the stack, and test coverage, or we analyze the relationship between the inputs and outputs of well-designed test cases, in order to determine whether the system has defects. For example, after writing a component, we run the code in the browser and observe how the component renders or what results it produces to judge whether it meets expectations; that is a dynamic test.

Automated testing and manual testing

Having just mentioned automated testing, in this section we introduce it in detail.

Software testing is hard work that takes a lot of time and energy; according to statistics, it takes up about 40% of total development time. At the same time, much of the work is repetitive. Before software is released or new code is merged, multiple rounds of regression testing are performed, which means a large number of test cases are executed over and over again. These runs mostly just verify that the newly submitted features or code do not break what has already been implemented, so the chance of finding a new bug is generally low. Running large numbers of regression tests is inefficient, but it is also very necessary, and that is why automated testing was born.

Automated testing is a concept defined relative to manual testing: the process of manually running test cases one by one is replaced by automatic execution by testing tools or systems. It is an important means of improving test efficiency, coverage, and reliability, and an integral part of software testing.

Automated testing turns human-driven testing into machine execution: it simulates the steps of manual testing and automatically completes unit testing, functional testing, load testing, and so on by executing test scripts written in a programming language.

For front-end programmers, in addition to automated testing with static tools such as TypeScript and ESLint, unit testing tools such as Jest, Vitest, and Mocha and end-to-end testing tools such as Cypress and Playwright can also be used for automated testing.

White box testing and black box testing

Depending on whether a testing method targets the internal structure of the software system or its external behavior, it is called a white-box or a black-box testing method.

White-box testing, also known as logic-driven or structural testing, assumes knowledge of how the product works internally: understanding its program structure and statements, testing the program according to its internal structure, and exercising its variable states, logical structure, and execution paths to check whether every path works as specified and whether the program's internal actions conform to the design specification.

Anyone who has written unit tests probably knows that after writing the test code we usually run a code coverage check and use the generated coverage report to judge whether the tests are sufficient. Code coverage here is the ratio of the branches, functions, and statements of the source code exercised by the tests to the source code as a whole. If coverage does not meet the requirement, we write one or more additional test cases for the uncovered code to raise it. This kind of testing can be called white-box testing.

In addition, the tools just mentioned, such as TypeScript and ESLint, can also be regarded as a kind of white-box testing tool.

Black-box testing, also known as data-driven testing, treats the program as a black box that cannot be opened, and tests the software directly without considering its internal structure and characteristics.

Black-box testing does not look at the internal structure of the software. It focuses on the program's external interface: the inputs and outputs of the software and the needs of users, verifying the software's functions from the user's perspective, or by playing the role of the user, to capture the real user experience.

As front-end programmers, most of the testing methods we use should use black-box testing. The specific reasons and practices can be seen below.

Unit testing, integration testing and system testing

A software system is composed of many units, which may be objects, classes, or functions, or larger units such as components or modules. To ensure the quality of the system, we must first ensure the quality of the units that compose it, that is, perform unit testing. Sufficient unit testing finds and fixes problems within each unit and thus lays the foundation for the quality of the whole system.

Most unit testing work should be done by developers, but many developers focus only on programming and getting the code written, leaving testing to testers rather than spending time on it themselves. It must be made clear that if unit testing is not done well, far more and more varied errors will surface during integration and later testing phases, and a great deal of time will be spent tracking down simple mistakes hidden inside individual units, which lengthens the whole project and increases the cost of the software.

As a software developer, you must be clear: the earlier errors in the software are discovered, the lower the cost and difficulty of fixing and maintaining them, and unit testing is the best opportunity to catch these errors early.

Unit testing emphasizes the independence of the object under test: the unit under test is isolated from the rest of the program to avoid being affected by other units, for example by separating the module under test from its parent and child modules and testing it on its own. However, once its dependencies are isolated, the module under test may no longer work properly. This is where mocking, which is used constantly in front-end unit testing, comes in; see below for details.

In software development we often run into this situation: unit tests confirm that each module works on its own, but after the modules are integrated, some of them stop working properly. A little thought shows that this is mainly because of problems at the interfaces where modules call each other, such as mismatched interface parameters or wrong data being passed. This is where integration testing is needed: integration testing assembles the units that have already passed their tests according to the design requirements and checks whether there are problems with the interfaces between them.

When performing integration testing, you need to choose an integration mode, that is, a strategy for how to integrate. Integration testing can basically be summarized into the following two modes:

  • Non-incremental mode: test each module separately first, then put all the modules together according to the design requirements to form the desired program and test it as a whole.
  • Incremental mode: combine the next module to be tested with the modules that have already been tested, expanding module by module and gradually increasing the scope of testing.

In actual work, the incremental mode is generally adopted, with specific strategies including top-down, bottom-up, and mixed approaches; of course, the choice depends on the specific situation.

After integration testing, the separately developed modules have been assembled into a relatively complete system and most of the problems in the interfaces between modules have been eliminated; at this point the system testing stage can begin.

System testing takes the software that has passed integration testing and combines it, as one part of the computer system, with system elements such as hardware, data, and platforms, then runs a series of strict and effective tests on the whole system in a real operating environment to discover potential problems in the software and ensure that the system runs normally.

System testing is divided into functional testing and non-functional testing.

System-level functional testing must consider not only the interaction between modules but also the application environment of the system, simulating a user completing business flows from beginning to end, that is, end to end, to ensure that the system delivers the designed functionality and meets real business needs.

System non-functional testing is a test for the non-functional characteristics of the system in the actual operating environment or simulated actual operating environment, including load testing, performance testing, security testing, etc.

Test Driven Development

In agile methods, test-driven development (TDD) is the practice of writing tests first and code afterwards. Unlike the traditional flow of coding first and testing later, TDD writes the test scripts or designs the test cases before programming. This "test first" approach gives developers confidence in the code they write, along with the courage to refactor it.

The concrete TDD process is this: when you plan to add a new feature, do not rush to write the functional code. First think through the specific conditions and usage scenarios, then write test cases for the code you are about to write, and run them with a testing tool; naturally they fail. Use the tool's error messages to understand why the test failed, add code step by step in a targeted way, and run the tests again, repeatedly modifying and adding code until the test cases pass.
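A minimal sketch of one round of this cycle, using the Jest-style syntax introduced later in this article and a hypothetical capitalize function:

// Step 1 (red): write the test first. Running it fails because
// capitalize has not been implemented yet.
test("capitalize turns 'hello' into 'Hello'", () => {
  expect(capitalize("hello")).toBe("Hello")
})

// Step 2 (green): add just enough code to make the test pass,
// then keep refactoring with the test as a safety net.
function capitalize(word) {
  return word.charAt(0).toUpperCase() + word.slice(1)
}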

TDD prevents developers from writing code haphazardly, requiring every line written to be meaningful. Without it, even when the code is "finished" the programming work is not really over, because unit testing has not yet been done and errors found there still have to be fixed. TDD presets the various application scenarios and preconditions up front, prompting developers to think, to write more complete and higher-quality code, and to work more efficiently.

In addition, TDD also keeps testing independent: the design of the test cases is not influenced by the implementation, which keeps the tests objective and comprehensive.

For programmers with strong abstraction skills who like to think through scenarios and preconditions before writing code, TDD is undoubtedly a boon. But if you lack that habit or are eager to get a feature working, there is no need to force it; just add the unit tests promptly after the feature is finished.


That concludes the overview of software testing in the context of software engineering. As an important part of software engineering, software testing plays a vital role in software development and runs through the entire life cycle of the software. Understanding its definition, purpose, and classification lets us, as programmers, clarify our position in the testing phase and understand our responsibilities and tasks in the testing process. I hope you now have a good grasp of these ideas.

Now, let us shift our attention from testing in software engineering to testing in front-end development. As a front-end programmer, what tests should we do and how should we test?

Tests for front-end programmers

As a front-end developer, when building a web or other type of application, from the perspective of the tested object, the following three types of tests can be performed:

  • Unit tests. As mentioned earlier, most unit testing work should be done by developers, and the same is true for front-end programmers. We test a single standalone function or class, or a composable function or hook, isolating it from the rest of the application. The focus should be functional testing, that is, the functional correctness of the unit under test, rather than compatibility testing, performance testing, or other kinds of tests. Moreover, because of the particularities of front-end applications, creating an environment isolated from the outside world often requires mocking a large part of the application's surroundings, such as third-party modules and network requests.
  • Component tests. Most web applications today are built with frameworks such as Vue and React that advocate component-based development, so testing the components we write should take up a relatively large share of front-end testing. Component testing checks whether the component mounts or renders correctly, whether it can be interacted with correctly, and whether its behavior meets expectations.
  • End-to-end (E2E) tests. After unit testing and component testing are done, we also need end-to-end testing: deploy the whole application to a real environment, or run it in a simulated one, and test the application's complete business flows from the user's perspective to ensure it behaves as intended. Besides exercising the real interaction and logic of the user interface, end-to-end tests also make real network requests, fetch data from the back-end database, run in a real browser, and capture snapshots and videos when the application fails. End-to-end testing can be regarded as system-level functional testing. Of course, (automated) end-to-end testing is not mandatory, nor does it have to be done by the front end; in actual development, an appropriate test plan should be chosen based on the actual situation.

In addition to the three kinds of functional tests above, front-end programmers can also run performance tests, for example using the browser's Lighthouse and Performance tools to measure page rendering and interaction performance, as well as other tests such as compatibility tests. Since these are not the focus of this article, they are not covered here.

Unit testing, component testing, and end-to-end testing of a large application often require designing a great many test cases and running them repeatedly. To greatly shorten the time spent on testing, it is natural to turn to automated testing: using testing tools and writing test scripts to improve efficiency. Fortunately, after many years of development in the front-end field, the community has produced many excellent open-source testing tools. Next, I will introduce how to use them for automated testing and how to write test scripts, to get you fully up and running with automated testing.

Getting started with front-end automated testing

If you were to test the following function now, what would you do?

function sum(a, b) {
  return a + b
}

The first step is naturally to design a test case, for example: given inputs 1 and 2, the function should output 3. After designing the test case, we of course have to run the function: pass in 1 and 2 and print its return value to see whether it is 3. So you might write the following test code:

console.log(sum(1, 2)) 

Then run this code and check whether the printed result is 3. With that, a piece of test code is done. Of course, this function is so simple that it could also be verified by static inspection; it is used here only for convenience. Note also that this code still requires manually observing the output to verify the result, so it does not yet count as automated testing.

After doing more tests like this, a pattern emerges: most test cases design one or more inputs and the corresponding expected outputs, and judge whether the code under test works correctly by checking whether, given those inputs, it produces or returns the expected outputs. Making that judgment is called an assertion.

Assertion

Node's assert module provides an assertion API. For example, we can use its equal method to assert on the sum function above:

const assert = require('assert')

assert.equal(sum(1, 2), 3)

If the implementation of the sum function does not meet expectations when this code runs, the equal method throws an AssertionError and prints the detailed reason.

In addition to the assert module provided by Node, many assertion libraries have appeared in the community, providing various assertion styles; the most representative are Chai and Jest.

Chai

Chai provides three different assertion styles for users to choose from.

assert

The assert style is similar to Node's assert module, but provides more APIs and can also run in the browser:

const assert = require('chai').assert
const foo = 'bar'

assert.typeOf(foo, 'string') // without optional message
assert.typeOf(foo, 'string', 'foo is a string') // with optional message
assert.equal(foo, 'bar', 'foo equal `bar`') 

The assert-style API allows users to pass in an optional string describing the assertion behavior as the last parameter, which will be displayed in the error message when the assertion fails.

BDD

The BDD style provides two types of assertions, expect and should, both of which support chained calls so that users can write assertions in something close to natural language. Usage is as follows:

// expect:
const expect = require('chai').expect
const foo = 'bar'

expect(foo).to.be.a('string')
expect(foo).to.equal('bar')

// should:
const should = require('chai').should()
const foo = 'bar'

foo.should.be.a('string')
foo.should.equal('bar')
foo.should.have.lengthOf(3) 

Looking carefully at the two APIs shows the difference: with expect you wrap the value under test in the expect() function and then chain assertions on it, whereas with should you call should() once and can then chain assertions directly on the value itself. The principle behind this is also apparent: calling should() adds a should property to Object.prototype, so every object gains access to the chained assertion methods.

Jest

The Jest-style API is similar to Chai's expect syntax, but instead of long natural-language chains, it calls a matcher method directly to make the assertion:

expect(2 + 2).toBe(4)
expect('How time flies').toContain('time')
expect({a: 1}).not.toEqual({b: 2}) 

As shown in the example above, methods like toBe() and toEqual() that assert some aspect of the value under test are called matchers. Commonly used matchers include toBe, toEqual, toContain, and so on; you can refer to Jest's matcher API documentation to learn more. There are not many of them, fewer than 40, and I believe you can pick them up easily, so I won't go into detail here.

Use Jest

Through this introduction to assertions, the most basic building block of unit testing, we have seen three assertion styles and their APIs. Now that we have the basic ability to write unit tests, let's formally learn how to use an automated testing tool to run them, using Jest as the example.

Besides being an assertion style, Jest is also a testing framework for unit testing, with the ability to run test scripts. For common JS projects it needs no configuration and works out of the box, and it supports excellent capabilities such as snapshot testing and parallel test runs.

Let's try unit testing with Jest. First install Jest:

npm install jest -D 

After the installation is complete, we create a new __tests__ directory and then create a sum.spec.js file in it. By default, Jest automatically finds and runs all .js, .jsx, and .ts files inside __tests__ directories, as well as all files with a .test or .spec suffix, so we don't have to configure the location of the test files manually.

In the sum.spec.js file we can write the following test code:

function sum(a, b) {
  return a + b
}

describe("sum", () => {
  test("input 1 and 2, output 3", () => {
    expect(sum(1, 2)).toBe(3)
  })
})

After writing the test code, enter the following command to start Jest to run the test:

npx jest 

After the test has finished running, Jest will output the following to the console to indicate that the test passed:

OK, a super simple unit test is done!

Let's introduce the functions used in the test code in detail:

describe() and test() for organizing test code

First is the test() method, which declares a test case (which can also simply be called a test). When writing unit tests, we basically organize them in units of test cases. Its first parameter is a string describing the test case; here we use the format "input xx, output xx", which makes the intent of the test case clear.

The second parameter of test() is a function containing the main body of the test case, namely the assertions. A test case may contain multiple assertions, but what is asserted should match the intent of the test case.

test() also accepts an optional timeout parameter, which specifies the timeout for the test; the default is 5 seconds.
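For example, the third argument below raises the timeout of a slow test to 10 seconds (fetchUser is a hypothetical asynchronous helper used only for illustration):

test("loads the user profile", async () => {
  const user = await fetchUser("alice") // hypothetical slow network call
  expect(user.name).toBe("alice")
}, 10000) // fail the test if it takes longer than 10 seconds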

test() also has an alias: it(). Using it() to describe a test case reads even more like natural language, for example:

It("should return the correct result", () => {expect(sum(1, 2)).toBe(3)expect(sum(2, 4)).toBe(6)expect(sum(10, 100)).toBe(110)
}) 

The describe() method organizes one or more test cases, grouping related tests into a block called a test suite. Using describe() to organize test cases is the recommended way of writing tests: it isolates the tests from other content and makes them easier to maintain.

describe() blocks can be nested, for example like this (example from the official documentation):

describe('binaryStringToNumber', () => {
  describe('given an invalid binary string', () => {
    test('composed of non-numbers throws CustomError', () => {
      expect(() => binaryStringToNumber('abc')).toThrowError(CustomError);
    });

    test('with extra whitespace throws CustomError', () => {
      expect(() => binaryStringToNumber('  100')).toThrowError(CustomError);
    });
  });

  describe('given a valid binary string', () => {
    test('returns the correct number', () => {
      expect(binaryStringToNumber('100')).toBe(4);
    });
  });
});

Nested describe() blocks let us group and organize test cases at a finer granularity. Of course, if you don't like or aren't used to describe(), you can also call test() directly in the top-level context, and Jest will automatically wrap a test suite around it.

That covers the two functions most commonly used to organize and write test code: describe() and test() / it(). Additionally, both functions support extension methods such as skip and only, which skip or filter test suites and test cases under certain conditions:

test.skip("跳过这个测试", () => {expect(sum(1, 2)).toBe(3)
})

test.only("只允许这个测试", () => {expect(sum(1, 2)).toBe(3)
}) 

More APIs and details can be found in the documentation, so I won't go over them here.

At this point you may ask: don't functions like describe() and test() need to be imported before being used? The answer is no. Jest automatically injects these global APIs into the global context before executing the test code, so they can be used directly without importing. If you prefer to import them manually, you can create a Jest configuration file and set the injectGlobals field to false to turn off the global injection.
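A minimal sketch of what that looks like; with injection disabled, the APIs can be imported explicitly from @jest/globals:

// jest.config.js — turn off automatic injection of the global APIs
module.exports = {
  injectGlobals: false,
}

// sum.spec.js — import the APIs explicitly instead
import { describe, test, expect } from '@jest/globals'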

Hook functions

When the test code becomes complex and contains many repeated operations such as initialization, we can extract those repeated parts and run them in hook functions. A test file, test suite, and test case each have a life cycle, and hook functions split these life cycles into a before stage and an after stage. Jest provides four hook functions that let users run custom behavior at these points.

beforeAll() and afterAll() register a callback that is called once before, or once after, all tests in the current context have run.

For example, if beforeAll() is called in the top-level context:

beforeAll(() => {
  console.log(1)
})

describe("sum1", () => {
  test("test 1", () => {
    expect(sum(1, 2)).toBe(3)
  })
  test("test 2", () => {
    expect(sum(1, 2)).toBe(3)
  })
})

describe("sum2", () => {
  test("test 3", () => {
    expect(sum(1, 2)).toBe(3)
  })
  test("test 4", () => {
    expect(sum(1, 2)).toBe(3)
  })
})

The console.log(1) statement will be executed only once, before the tests in both suites run. afterAll() behaves the same way, but after them.

And if beforeAll() is placed inside a test suite:

describe("sum1", () => {test("测试1", () => {expect(sum(1, 2)).toBe(3)})test("测试2", () => {expect(sum(1, 2)).toBe(3)})
})

describe("sum2", () => {beforeAll(() => {console.log(1)})test("测试3", () => {expect(sum(1, 2)).toBe(3)})test("测试4", () => {expect(sum(1, 2)).toBe(3)})
}) 

Then console.log(1) will be executed only once, before the tests in the sum2 suite run.

The callbacks registered with beforeEach() and afterEach() are called before or after each test in the current context. Note the difference from beforeAll() and afterAll(): the former pair runs before and after every single test, while the latter pair runs only once, before and after all tests.
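A small example to make the difference concrete, reusing the sum function from above:

const calls = []

beforeAll(() => calls.push("beforeAll"))   // runs once for this context
beforeEach(() => calls.push("beforeEach")) // runs before every test

test("first test", () => {
  // calls is ["beforeAll", "beforeEach"] at this point
  expect(sum(1, 2)).toBe(3)
})

test("second test", () => {
  // calls is ["beforeAll", "beforeEach", "beforeEach"]:
  // beforeAll did not run again, beforeEach did
  expect(sum(1, 2)).toBe(3)
})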

These four hook functions are used all the time when writing test code; we typically put mocking, initialization, and state-cleanup logic into them. You might ask: what if several test files share some repeated logic?

Jest lets us write a setup file that runs before the test code in every test file, providing cross-file configuration. First create a setup file, for example jest-setup.js, with the following content:

beforeAll(() => {
  console.log(1)
})

Then create a Jest configuration file jest.config.js in the root directory with the following content:

/** @type {import('jest').Config} */
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest-setup.js'],
}

The setupFilesAfterEnv field takes an array of paths to setup files, which are executed after Jest's test environment is installed (including the initialization of global APIs such as describe() and the hook functions) and before the test code runs.


That covers the two basics of unit testing with Jest. Before moving on to the next key topic, let's talk about Jest, the testing framework, itself.

Through the examples above we have used Jest a little and experienced the beauty of its "zero configuration": we can start writing test code right after installation, the related APIs don't need to be imported manually, and once the test code is written a single command runs the tests. It really is convenient.

However, what we have just shown is only the very simple scenario of testing a JS function. In more complex scenarios, such as unit testing a web application, Jest may not be as convenient as it seems. Why?

Think about how Jest runs a test file. Naturally, it runs it with Node: after injecting describe(), beforeAll(), and the other global APIs, Node executes the test code and handles path resolution and loading of imported dependencies. At this point, if a .vue file is imported, the test fails immediately, because Jest does not recognize this type of file; even test code written directly in TypeScript will fail. In other words, Jest has no built-in support for .ts, .vue, or .jsx files, because it only understands JS syntax. To support other syntaxes, transformers are needed to convert .ts, .jsx, and similar syntaxes into standard JS so the test code can keep running.

For example, if you want Jest to be able to load and run .ts and .vue files, you need a configuration like this:

// jest.config.js
module.exports = {
  transform: {
    '^.+\\.(j|t)sx?$': 'babel-jest',
    '^.+\\.vue$': '@vue/vue3-jest'
  }
}

We use Babel to process the TypeScript content and strip the type annotations. We need to install @babel/core, @babel/preset-env, and @babel/preset-typescript in advance and create a babel.config.js to configure Babel's behavior:

// babel.config.js
module.exports = {
  presets: [
    '@babel/preset-typescript',
    ['@babel/preset-env', { targets: { node: 'current' } }],
  ],
}

If you want Jest to support transforming Vue files, you need vue-jest; here I use the version that supports Vue 3 (@vue/vue3-jest).

In addition, Jest's support for the ESM specification is still experimental, so ESM code generally has to be transformed down by Babel. So if you want to use Jest to test a web application, you need quite a bit more configuration. And because Jest's test transformation and the build tool's transformation run in two different pipelines, two different configuration files have to be maintained, which adds to the setup burden early in a project.

Therefore, I recommend using Vitest over Jest!

Vitest is an extremely fast unit testing framework powered by Vite. Built on top of Vite and its powerful capabilities, Vitest supports the following excellent features:

  • Share the same configuration as Vite! If your project also uses Vite as its build tool, you can share a single set of configuration with it. This is because Vitest also uses Vite, with its resolver, loader, and transformer, to process your test code and every module it imports. That means when Vitest runs, the configuration in your Vite config file and all plugins outside the production environment are invoked and take effect, and your test environment uses the same pipeline as the development environment, without the extra configuration Jest needs!
  • Truly out of the box, and extremely fast! Thanks to esbuild's native support for TypeScript, JSX, and ESM syntax, Vite can handle these syntaxes natively, which makes Vitest genuinely work out of the box, and very fast!
  • HMR for tests! While loading and transforming modules, Vite's dev server gradually builds a module graph that caches transform results and records module dependencies. With the module graph, Vite can cleanly determine the boundaries and scope of a hot update. Building on Vite's HMR capability, Vitest re-runs the test files that depend on the source code whenever you modify that source, achieving HMR for tests. By default Vitest starts in watch mode, that is, it does this automatically, which is undoubtedly a boon for anyone who likes developing in TDD style!

In addition to these three capabilities backed by Vite, Vitest also has the following features:

  • Runs test files in multiple threads. Worker threads are used to run as many test files concurrently as possible. At the same time, Vitest isolates the running environment of each test file so that the state of one file never affects another.
  • Supports the common features of most testing frameworks, such as snapshot testing, mocking, test coverage, concurrent tests, and DOM simulation.
  • An API compatible with Chai and Jest. Chai's assertion API and most of Jest's API are built in.

Vue's official scaffolding tool create-vue has already adopted Vitest as the default unit testing framework. If you are still hesitant, thinking Vitest is a relatively new framework and doubting whether it can be used in real projects, you can read this article.

Use Vitest

Having introduced Vitest's features, let's try its basic usage. As before, install Vitest first:

npm install -D vitest 

After the installation is complete, let's write the test code and test the sum function as well:

import { describe, test, expect } from "vitest"

function sum(a, b) {return a + b
}

describe("sum", () => {test("输入 1 和 2,输出 3", () => {expect(sum(1, 2)).toBe(3)})
}) 

Since Vitest does not inject global APIs automatically by default, we need to import methods such as describe() and test() manually. Automatic injection can be enabled through the globals configuration field, but we will not enable it here.
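For reference, a minimal sketch of how that option could be turned on in the Vite/Vitest configuration (we keep it off in this article):

// vite.config.ts
/// <reference types="vitest" />
import { defineConfig } from "vite"

export default defineConfig({
  test: {
    globals: true, // inject describe/test/expect globally, Jest-style
  },
})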

After the test code is written, run the following command to start Vitest:

npx vitest 

Unlike Jest, Vitest treats all .js, .ts, and similar files with a .spec or .test suffix as test files; for details, see the include field.

The test passes when the console prints the following information:

That is the basic usage of Vitest; more about it will appear in the hands-on section below. Now let's continue with the common techniques of unit testing, which are the key content of this part.

Test asynchronous code

Testing asynchronous code is very common in real scenarios, such as testing back-end APIs or asynchronous functions. Because of the nature of asynchronous code, testing it takes a bit more work.

For example, we want to test the following asynchronous function:

async function hello() {
  return "Hello World!"
}

To assert that it returns the string "Hello World!", suppose you test it the same way you would test a synchronous function:

test("hello", () => {expect(hello()).toBe("Hello World!")
}) 

Running this test will directly report an error:

The reason is simple: what we are asserting is not the string "Hello World!" but the Promise object returned by the function.

After knowing the reason, we can improve it naturally:

test("hello", async () => {const res = await hello()expect(res).toBe("Hello World!")
}) 

This time we pass an asynchronous function into test(), use the await keyword to wait for the hello function to resolve and return its result, and then assert on that result.

Instead of using await to wait for the asynchronous call to complete, we can also use the resolves and rejects modifiers. Usage is as follows:

test("hello", async () => {await expect(hello()).resolves.toBe("Hello World!")
}) 

As you can see, resolves extracts the resolved value from the Promise returned by the hello function, so we can assert directly on that value.

rejects is used in the same way:

async function hello() {
  throw new Error("Hello World!")
}

test("hello", async () => {
  await expect(hello()).rejects.toThrow("Hello World!")
})

Those are the two ways of testing asynchronous code. They are fairly simple, and I'm sure you will master them quickly.

Handling timers

Although a timer callback is also a kind of asynchronous code, it does not return a Promise, so it needs different handling.

For example, test the following code:

let a = 1

function timer() {
  setTimeout(() => {
    a = 2
  }, 3000)
}

How do we assert that calling the timer function sets the value of a to 2 after 3 seconds? If we test it synchronously, that is:

test("timer", () => {expect(a).toBe(1)timer()expect(a).toBe(2)
}) 

The result is naturally an error, because the second assertion runs before the callback has been called:

(Note: in this example we assert the initial state of a before calling the function, i.e. the first assertion. This ensures that the state of the object under test has not changed unexpectedly before the real assertion is made, so that the result of the real assertion is produced by our own action (here, calling the timer function) rather than by outside interference. You can think of this step as a way of controlling variables.)

To assert on the result of the timer we can use the useFakeTimers() method, which does what the name suggests: it installs fake timers. After vi.useFakeTimers() is called, every timer call, including setTimeout, setInterval, nextTick, and setImmediate, has its callback held in a timer queue and not executed, even when the specified timeout is reached. You have to run those callbacks manually with methods such as vi.runAllTimers() or vi.advanceTimersByTime(). For example:

test("timer", () => {vi.useFakeTimers()expect(a).toBe(1)timer()vi.runAllTimers()expect(a).toBe(2)vi.useRealTimers()
}) 

After enabling fake timers with vi.useFakeTimers(), we can call vi.runAllTimers() to run all queued timer callbacks. In addition, to avoid affecting other tests, we also need to call vi.useRealTimers() to restore the real timers. In real projects we would typically put this initialization and cleanup into hook functions.

We can also use vi.advanceTimersByTime(), which only executes the callbacks whose timeouts fall within the number of milliseconds passed in:

test("timer", () => {vi.useFakeTimers()expect(a).toBe(1)timer()vi.advanceTimersByTime(2000)expect(a).toBe(1)vi.advanceTimersByTime(3000)expect(a).toBe(2)vi.advanceTimersByTime(4000)expect(a).toBe(2)vi.useRealTimers()
}) 

Besides timers, vi.useFakeTimers() can also be used to mock the date (Date); see the vi.setSystemTime() method for details.
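A short sketch of mocking the date this way:

test("date", () => {
  vi.useFakeTimers()
  vi.setSystemTime(new Date(2022, 0, 1)) // pretend it is January 1, 2022
  expect(new Date().getFullYear()).toBe(2022)
  vi.useRealTimers() // restore the real clock for other tests
})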

Mocking (Mock)

In real test scenarios we have to deal with many modules that call back-end APIs. Calling those interfaces requires network requests, which lengthens the tests and reduces their efficiency. Besides, what we are doing is not end-to-end testing but unit testing, which isolates the object under test from the outside world, so there is no need to make real network requests. Likewise, to shield the unit from other modules, such as third-party modules, we need to avoid calling them, or even substitute fake versions of them. More importantly, we often also need to assert that the object under test calls other modules or methods, that is, to spy on it. This is where mocking comes in handy.

Mocking can be roughly divided into two kinds: stubbing (stub) and spying (spy).

A stub changes the implementation of the mocked object, that is, it fakes another version to replace it. A spy, in contrast, does not change the implementation of the mocked object but monitors how it is used, such as how many times the function is called and which arguments are passed in.

Here I classify mocks only by "whether the implementation is changed". There are also classifications that divide them into mocks, stubs, fakes, and so on; in most cases they are collectively referred to as mocks.

Next, I will introduce how mocking is used in practice, covering mock functions first and then mock modules.

Mocking functions

For example, suppose we want to spy on calls to the sum method of the obj object below and obtain information such as how many times it is called, which arguments it receives, and what it returns:

const obj = {
  sum: (a: number, b: number) => {
    return a + b
  }
}

We can use the vi.spyOn() method:

test("spy", () => {vi.spyOn(obj, "sum")obj.sum(1, 2)expect(obj.sum).toBeCalledTimes(1)expect(obj.sum).toBeCalledWith(1, 2)expect(obj.sum).toHaveReturnedWith(3)vi.mocked(obj.sum).mockClear()
}) 

vi.spyOn() lets us spy on a method of an object. After the spied-on function has been called, we can assert on the call information with matchers such as toBeCalledTimes() and toBeCalledWith().

vi.spyOn() returns an object of type SpyInstance, and we can also assert directly on that object, for example:

test("spy", () => {const spy = vi.spyOn(obj, "sum")obj.sum(1, 2)expect(spy).toHaveBeenCalledOnce()expect(spy).toHaveBeenNthCalledWith(1, 1, 2)expect(spy).toHaveReturnedWith(3)spy.mockClear()
}) 

You may have noticed that we call a mockClear() method, which clears all recorded calls of the mocked object. Its purpose, like that of vi.useRealTimers(), is to avoid affecting other tests. There are two similar methods, mockReset() and mockRestore(): the former clears the call records and replaces the implementation of the mocked object with an empty function, while the latter clears the call records and restores the object's original implementation. In this example we only spy on the function without changing its implementation, so mockClear() is enough.

Calling mockClear() or mockReset() on every mock object quickly becomes repetitive, so we can use vi.clearAllMocks(), vi.resetAllMocks(), and vi.restoreAllMocks() to apply these operations to all mock objects at once, usually from inside a hook function.
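For example, a common pattern is to restore every mock automatically after each test:

import { afterEach, vi } from "vitest"

afterEach(() => {
  // clear call records and restore the original implementations of all
  // mocks, so one test's mocking never leaks into the next
  vi.restoreAllMocks()
})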

mockClear() is a method on the SpyInstance and MockInstance types, so we can call it directly on the object returned by vi.spyOn(). If we want to call it directly on the original function, like this:

obj.sum.mockClear() 

If you're using JS this works, but if you're using TS it reports an error. In that case, you can use vi.mocked() to provide type support for the mocked object:

vi.mocked(obj.sum).mockClear() 

What if we want to spy on a function exported by another module? For example:

// math.ts
export function sum(a: number, b: number) {
  return a + b
}

In this case, we can import the math module as a namespace:

import * as math from "./math"

test("spy", () => {vi.spyOn(math, "sum")math.sum(1, 2)expect(math.sum).toBeCalledTimes(1)expect(math.sum).toBeCalledWith(1, 2)expect(math.sum).toHaveReturnedWith(3)vi.mocked(math.sum).mockClear()
}) 

As you can see, this is simple and direct. At this point you may be wondering: can we only spy on methods of objects? Can't we spy on a function directly?

As far as I know, that doesn't seem to be possible. If you really want to spy on a function directly, you can do this:

import { sum } from "./math"

test("spy", () => {const math = { sum }vi.spyOn(math, "sum")math.sum(1, 2)expect(math.sum).toBeCalledTimes(1)expect(math.sum).toBeCalledWith(1, 2)expect(math.sum).toHaveReturnedWith(3)vi.mocked(math.sum).mockClear()
}) 

Just attach it to an object first.

Now that we know how to spy on functions, let's see how to mock one. For example, suppose we want to mock the sum function so that it behaves like this:

function sum(a: number, b: number) {
  return a + b + 100
}

We can call the mockImplementation() method directly on the SpyInstance:

test("mock", () => {vi.spyOn(obj, "sum").mockImplementation((a, b) => a + b + 100)obj.sum(1, 2)expect(obj.sum).toHaveReturnedWith(103)vi.mocked(obj.sum).mockRestore()
}) 

mockImplementation() can be used directly on SpyInstance and MockInstance (which inherits from SpyInstance) to replace the implementation of the mocked object. Since we changed the internal implementation of sum, we need to call mockRestore() to restore it.

Mocking modules

Having seen how to spy on and mock functions, let's look at how to mock modules.

Mocking a module requires the vi.mock() method. For example, to mock the math module from before, we can write:

import { sum } from "./math"
 
vi.mock('./math')

test("mock", () => {sum(1, 2)expect(sum).toHaveBeenCalledOnce()expect(sum).toHaveBeenCalledWith(1, 2)vi.mocked(sum).mockRestore()
}) 

After we pass the path of the module to vi.mock(), the method automatically mocks everything the module exports, so when we call one of its exported functions we can assert on it directly.

We can also pass in a factory function to define what the module should export, for example:

vi.mock('./math', () => ({
  sum: (a: number, b: number) => a + b + 100
}))

Here we mock the exports of the math module, returning a new sum method. But running the test shows that it fails:

This is because the factory only replaces the math module's exports with an ordinary function: the sum it returns is not a mock function, so its calls cannot be tracked. Let's learn another way to create mock functions: vi.fn().

Calling vi.fn() returns an empty mock function of type Mock. Mock also inherits from SpyInstance, so we can use matchers such as toHaveBeenCalledOnce() on it directly. Calling the empty mock function returns undefined; to give it an implementation, we can pass a function into vi.fn(). The code above can therefore be changed to:

vi.mock('./math', () => ({
  sum: vi.fn((a: number, b: number) => a + b + 100)
}))

Running the test again shows that it passes, which means our mock worked.

If we only want to mock one specific function exported by the module and keep the other exports as they are, we can do this:

import { sum } from "./math"
import * as Math from "./math"
 
vi.mock('./math', async () => ({...await vi.importActual<typeof Math>('./math'),sum: vi.fn((a: number, b: number) => a + b + 100)
})) 

vi.importActual() imports all of a module's exports unchanged. Note that when using TS, remember to pass in the type parameter.

Besides passing a factory function, we can also put the mocked exports into a __mocks__ directory: if a file with the same name exists in the __mocks__ directory when vi.mock() is called, all imports of that module will return its exports. For example, create a __mocks__ directory under the __tests__ directory, then create a math.ts file in it with the following content:

import { vi } from "vitest"

export const sum = vi.fn((a: number, b: number) => a + b + 100) 

Then modify the mock code of the test to:

vi.mock('./math') 

Rerun the test and the test will pass.

Note that calls to vi.mock() are automatically hoisted to the top-level context, even when called inside a test suite or test. So if you want to mock a module only inside a particular suite or test, use the vi.importMock() method:

import * as Math from "./math"

test("mock", async () => {const { sum } = await vi.importMock<typeof Math>('./math')sum(1, 2)expect(sum).toHaveBeenCalledOnce()expect(sum).toHaveBeenCalledWith(1, 2)sum.mockRestore()
}) 

This method is used the same way as vi.mock(), except that the mocked behavior is defined inside the test suite or test. In addition, it returns the intersection of the original module type and the Mock type, so the type information is available without calling vi.mocked().

Mocking global variables

Mocking global variables is relatively simple: just call vi.stubGlobal(). Here is an example taken straight from the documentation:

import { vi } from 'vitest'

const IntersectionObserverMock = vi.fn(() => ({
  disconnect: vi.fn(),
  observe: vi.fn(),
  takeRecords: vi.fn(),
  unobserve: vi.fn(),
}))

vi.stubGlobal('IntersectionObserver', IntersectionObserverMock) 

Test coverage

After writing unit tests, many people wonder whether they have written enough of them. At this point they check whether the test coverage is high enough.

Test coverage, as the name suggests, measures the proportion of the source code that is exercised by the tests. Vitest supports collecting coverage through c8 and istanbul; let's try it.

Vitest uses c8 by default, we need to install the corresponding package first:

npm i -D @vitest/coverage-c8 

Then update the test code, this time we still test the sum function:

import { test, expect } from "vitest"
import { sum } from "../src/math"
 
test("sum", () => {expect(sum(1, 2)).toBe(3)
}) 

Then enter the following command at the command line:

npx vitest run --coverage 

Then the console outputs a test coverage report:

The four metrics above are statement coverage (statements), branch coverage (branches), function coverage (functions) and line coverage (lines). A coverage directory is also generated under the project root, containing a more detailed report.

The way to use istanbul is similar, just install the corresponding package, so I won’t go into details here.

istanbul works by instrumenting the source code: it inserts counters that record how many times each function, statement and branch is executed, storing the results in a variable. After the tests finish, these statistics are read from that variable to generate the coverage report. c8, on the other hand, uses the V8 engine's built-in coverage collection and generates the report directly after the tests complete.
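To make the idea concrete, here is a rough conceptual sketch of what instrumentation does to a function; the counter object and names here are made up for illustration, and real istanbul output is far more elaborate:

// before instrumentation
export function sum(a: number, b: number) {
  return a + b
}

// conceptually, after istanbul-style instrumentation (simplified illustration)
const coverage = { f: { sum: 0 }, s: { s1: 0 } }

export function instrumentedSum(a: number, b: number) {
  coverage.f.sum++ // how many times the function was entered
  coverage.s.s1++  // how many times the return statement ran
  return a + b
}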

In real projects, to ensure the quantity and quality of the unit tests developers write, a coverage threshold is usually enforced, and coverage is checked against this threshold before code is committed or in the CI pipeline. Let's try it out.

If you are using Vite, you can configure this directly in vite.config.ts:

/// <reference types="vitest" />
import {defineConfig} from "vite"

export default defineConfig({
  // other options...
  test: {
    coverage: {
      lines: 80,
      functions: 80,
      branches: 80,
      statements: 80,
    },
  },
})

We use 80% as the threshold. For demonstration purposes, let's modify the implementation of the sum function:

export function sum(a: number, b: number) {
  if (a > 100) {
    return 100
  }
  return a + b
}

Then run the same command just now to run the test, and the coverage report is as follows:

Because the threshold is not reached, the console reports an error; we can then see which branches or lines of code are not covered and add test cases for them. Designing test cases based on the program's internal implementation, such as its branches and functions, is essentially white-box testing. If you want to measure coverage for black-box testing, you can look at the proportion of equivalence classes (both valid and invalid) and boundary values that your test cases cover out of the total. Interested readers can look up more on this topic themselves.
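For example, to cover the a > 100 branch that the report flags as unexecuted, a test case along these lines could be added (a sketch based on the modified sum above):

test("sum caps the result when a is greater than 100", () => {
  // exercises the a > 100 branch of the modified sum
  expect(sum(101, 2)).toBe(100)
})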

That covers how to generate a coverage report and use it as a coverage gate. Test coverage is indeed useful to some extent as a way of checking the adequacy of unit tests. However, high test coverage does not equal high test quality; in many cases a high coverage number is actually easy to achieve. For example, we can change the test case to this:

test("sum", () => {expect(sum(1, 2)).not.toBe(100)
}) 

The test case above says: given inputs 1 and 2, the output is not 100. Coverage reaches 100%, well above the required threshold, but is the quality of this test really high? Obviously not, because the test case is meaningless. We should assert that the function returns the correct result (namely 3), not that it returns some irrelevant number; the only alternative would be to exhaustively assert that the result is not equal to every number other than 3, which is clearly impossible.

Many people aim for high coverage when writing tests and take pride in reaching 100%, but that by itself is not useful. What you should think about is how to design high-quality test cases, rather than staring at a number and piling up cases to chase it. In many situations, even 100% coverage does not mean the program is problem-free. As mentioned at the beginning of the article, software testing is about checking whether the software meets the specified requirements, and about finding potential errors in the program.

Martin Fowler mentioned in this article that high test coverage by itself does not mean much, but it is useful for finding the places in the source code that are not tested at all. He believes that if the following two things hold, you have written enough tests:

  • You rarely encounter bugs in production;
  • You rarely hesitate to modify code for fear of causing a production incident.

That's all I have to say about test coverage; I hope it improves your understanding of it.

Configure browser-like environment

Since unit testing frameworks such as Vitest and Jest run in the Node environment, if we want to test a web application and perform component testing, we need a browser-like environment to support browser features such as DOM API, local storage, and cookies.

Vitest provides the environment option to configure the test environment. Besides Node, it also supports jsdom , Happy DOM and Edge Runtime .

jsdom is a JS implementation of many web standards for the Node environment. Examples of its use are as follows:

const jsdom = require("jsdom")
const { JSDOM } = jsdom

const dom = new JSDOM(`<!DOCTYPE html><p>Hello world</p>`)
console.log(dom.window.document.querySelector("p").textContent) 

As you can see, once an HTML string is passed into the JSDOM constructor, many Web APIs, including querySelector(), become available on the resulting document.

Although jsdom implements many Web APIs, it is after all a simulated browser environment (a kind of headless browser), and some features simply cannot be implemented. One is layout: it cannot calculate an element's position on the page, such as its position in the viewport (getBoundingClientRect()) or properties like offsetTop. Another is navigation. So in some scenarios, testing the web environment with jsdom or Happy DOM may not meet your needs, and you will need to run the object under test in a real browser, for example by testing with Cypress.

Happy DOM, like jsdom, implements many browser features; it has higher performance than jsdom but implements slightly fewer features.

Let's use jsdom to configure the browser-like environment. First, we need to install jsdom:

npm install -D jsdom

Then modify the configuration:

// vite.config.ts
test: {
  environment: "jsdom",
},

Then you can use the Web API globally:

test("dom", () => {const div = document.createElement('div')div.className = 'dom'document.body.appendChild(div)expect(document.querySelector('.dom')).toBe(div)
}) 

Starting from 0.23.0, Vitest supports custom environments. You need to create a package named in the format vitest-environment-${name} that exports an environment object; Vitest also exposes a populateGlobal helper to make filling the global object easier. You can click here to view the guidelines.
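As a rough sketch of what such a package might look like (based on my reading of the guide at the time; treat the exact fields as assumptions and check the current Vitest documentation):

// vitest-environment-custom/index.ts
import type { Environment } from 'vitest'

export default <Environment>{
  name: 'custom',
  setup(global) {
    // fill the global object with whatever the environment needs
    global.myEnvFlag = true
    return {
      teardown(global) {
        // runs after all tests using this environment have finished
        delete global.myEnvFlag
      },
    }
  },
}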

Using jest-dom

When testing the DOM in a browser-like environment, we find that asserting on DOM nodes is cumbersome: checking whether a node has a certain attribute, whether it is visible, whether a button is disabled, whether an input box is focused, and so on usually requires calling several DOM APIs to gradually extract the attribute or value we want.

jest-dom provides many Jest matchers that simplify these steps. Since Vitest is compatible with Jest's assertion style, jest-dom can also be used with Vitest. Let's try it out.

First install:

npm install -D @testing-library/jest-dom 

After installation, we need to apply these matchers. We can choose to do this in the setup file:

// __tests__/vitest-setup.ts
import '@testing-library/jest-dom' 

Note that when this package is imported it internally calls expect.extend() to register these custom matchers, which means expect must be available as a global API. By default, Vitest disables global API injection, so we need to enable it manually and configure the path to the setup file:

/// <reference types="vitest" />
import path from "path"
import { defineConfig } from "vite"

export default defineConfig({
  // other options...
  test: {
    globals: true,
    environment: "jsdom",
    setupFiles: path.resolve(__dirname, '__tests__/vitest-setup'),
  },
})

If you don't like to enable global injection, you can change the content of the setup file to this:

// __tests__/vitest-setup.ts
import matchers from '@testing-library/jest-dom/matchers'
import { expect } from 'vitest'

expect.extend(matchers) 

Now you can use the matchers provided by jest-dom:

test("dom", () => {const div = document.createElement('div')div.className = 'dom'document.body.appendChild(div)expect(div).toBeInTheDocument()
}) 

jest-dom provides a modest number of matchers, only about 20. It is recommended that you read through them all in the repository and get familiar with them.

Snapshot testing

A snapshot is a serialized string that you can use to ensure that the output of the object under test does not change. The usage is as follows:

test("sum", () => {const res = sum(1, 2)expect(res).toMatchSnapshot()
}) 

The toMatchSnapshot() matcher compares the value passed to expect with a previously saved snapshot. On first use, it creates a new __snapshots__ directory to store the snapshots for each test file. The contents look roughly like this:

// Vitest Snapshot v1

exports[`sum 1`] = `3`; 

On subsequent runs toMatchSnapshot() compares against the saved snapshot and reports an error if it does not match, for example:

test("sum", () => {const res = sum(100, 200)expect(res).toMatchSnapshot()
}) 

Modify the test code, and after re-running the test, an error will be reported:

If the snapshot mismatch is the expected behavior, you can type 'u' at the console to update the failed snapshot.

If you don't want snapshots saved in a separate directory, you can use inline snapshots with the toMatchInlineSnapshot() matcher:

test("sum", () => {const res = sum(1, 2)expect(res).toMatchInlineSnapshot()
}) 

After running the test with toMatchInlineSnapshot(), the generated snapshot is written directly into the matcher's parentheses as an argument:

test("sum", () => {const res = sum(1, 2)expect(res).toMatchInlineSnapshot('3')
}) 

Another use of snapshot testing is recommended in the Jest documentation: testing React components, with the following example:

import renderer from 'react-test-renderer';
import Link from '../Link';

it('renders correctly', () => {
  const tree = renderer
    .create(<Link page="http://www.facebook.com">Facebook</Link>)
    .toJSON();
  expect(tree).toMatchSnapshot();
});

The above code renders the Link component, and then performs a snapshot test on the serialized result. The saved snapshot looks like this:

exports[`renders correctly 1`] = `
<a
  className="normal"
  href="http://www.facebook.com"
  onMouseEnter={[Function]}
  onMouseLeave={[Function]}
>
  Facebook
</a>
`; 

By performing a snapshot test on the rendering result of the component, it is very convenient to find out where the modified content does not match the previous version, and then fix or update it.

However, you should not rely too heavily on snapshot testing, nor over-apply it to components, because a snapshot test cannot express the intent of a test case well: it only compares serialized output. When a snapshot does not match, we cannot immediately tell whether this is caused by a bug somewhere in the code or is the expected result of updating the code. To find out, we may waste a lot of time on the mismatch, or even stop thinking and simply update the snapshot.

Snapshot testing is a double-edged sword. It may be useful in some scenarios, but it may also take the test to the other extreme. I personally recommend that developers write intentional tests, start with the input and output of the program, and focus on designing high-quality test cases.

Kent C. Dodds, the author of Testing Library , introduced several places that he thinks are very suitable for snapshot testing in his blog . Interested students can take a look.


That wraps up the introduction to using a unit testing framework for automated testing; I believe you have gained a lot from it. Next, we move on to the hands-on part, where we will unit test and component test a small web application.

Front-end automated testing practice

Let's unit test and component test the following address list app:

The technology stack is mainly Vue 3, Pinia and TypeScript. The source code repository is here, and I also provide a branch using Vuex; you can pull it down and compare as you learn.

Preparation

Using Vue Test Utils

Vue Test Utils is the official component mounting library, it provides many useful APIs to support the testing of Vue components, let's try it.

First install the package:

npm install -D @vue/test-utils 

Then create a new test file and enter the following code:

import { expect, test } from 'vitest'
import { mount } from '@vue/test-utils'
import { defineComponent } from 'vue'

const Component = defineComponent({
  template: '<p>Hello World!</p>',
})

test('Component', () => {
  const wrapper = mount(Component)
  expect(wrapper.find('p').text()).toBe('Hello World!')
})

After running the test, the terminal will display that the test passed.

We use the mount method to mount the component. Internally, mount first creates a parent component wrapping the component under test, then calls createApp() with that parent component as the root component to create a Vue application, and finally mounts it onto a div DOM node.
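To illustrate that flow, here is a heavily simplified sketch; it is not the actual Vue Test Utils source, and the names are made up:

import { createApp, defineComponent, h } from 'vue'

// a toy version of mount(), showing only the createApp-based flow
function simplifiedMount(Component: any, options: { props?: Record<string, unknown> } = {}) {
  // 1. a parent component that renders the component under test
  const Parent = defineComponent({
    render: () => h(Component, options.props),
  })
  // 2. create an app with the parent as the root component
  const app = createApp(Parent)
  // 3. mount it onto a fresh div
  const el = document.createElement('div')
  document.body.appendChild(el)
  app.mount(el)
  return { app, el }
}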

The mount method also supports passing in a configuration object to support more configurations for component rendering or initialization. I pick a few of the more commonly used configuration items and list them below:

  • data: overrides the component's default data. For example:

    const Component = defineComponent({
      data() {
        return {
          msg: 'Hello World!',
        }
      },
      template: '<p>{{ msg }}</p>',
    })

    test('Component', () => {
      const wrapper = mount(Component, {
        data() {
          return {
            msg: '111',
          }
        },
      })
      expect(wrapper.find('p').text()).toBe('111')
    })

  • props: sets the props of the rendered component:

    const Component = defineComponent({
      props: {
        msg: {
          type: String,
          required: true,
        },
      },
      template: '<p>{{ msg }}</p>',
    })

    test('Component', () => {
      const wrapper = mount(Component, {
        props: {
          msg: 'Hello World!',
        },
      })
      expect(wrapper.find('p').text()).toBe('Hello World!')
    })

  • global: options for the app created during mounting, including plugins (plugins to install on the created app) and stubs (stubs for child components of the component under test, useful when you don't want to render certain child components or want to mock them);
  • shallow: set this option to true when you don't want to render any child components.

Only a few fields of the mount options are introduced here. To keep this section short, it is recommended that you read the corresponding API documentation, which is very detailed.

Calling mount returns an object of type VueWrapper, which provides many utility methods for asserting on the component or updating its state. For example, the text method used in the examples above returns the text content of an element. Here are a few other commonly used methods; for more details, see here:

  • emitted: returns all events emitted by the component. For example:

    const Component = defineComponent({
      emits: ['fetch'],
      setup(props, { emit }) {
        emit('fetch', '123')
      },
    })

    test('Component', () => {
      const wrapper = mount(Component)
      expect(wrapper.emitted()).toHaveProperty('fetch')
      expect(wrapper.emitted('fetch')?.[0]).toEqual(['123'])
    })

  • find: queries a DOM node inside the component and returns an object of type DOMWrapper. A DOMWrapper is used much like a VueWrapper and offers many of the same utility methods;
  • trigger: triggers a DOM event on the component:

    const Component = defineComponent({
      data() {
        return {
          count: 0,
        }
      },
      template: '<button @click="count++">{{ count }}</button>',
    })

    test('Component', async () => {
      const wrapper = mount(Component)
      const button = wrapper.find('button')
      expect(button.text()).toBe('0')
      await button.trigger('click')
      expect(button.text()).toBe('1')
    })

    Note that to make sure the DOM has been updated before asserting after an event, trigger returns a Promise that resolves only once the DOM has updated, so we need to await it;
  • unmount: unmounts the component.

Vue Test Utils also exposes a flushPromises method; calling and awaiting it ensures that all pending promises (including DOM updates) have settled. Internally it uses a combination of macrotasks and microtasks to achieve this.

That covers the basic use of Vue Test Utils. The introduction is relatively short, partly to save space and partly because we will not use it directly as the mounting library for our Vue components; we will use Vue Testing Library instead.

Using the Vue Testing Library

Vue Testing Library is a testing library for Vue that internally depends on DOM Testing Library and Vue Test Utils. Compared with Vue Test Utils, it offers a more concise API for interacting with components: it drops the Vue-specific APIs for operating on and querying components, reducing these operations to plain, more abstract native DOM operations.

Testing Library is a library that focuses on simulating user behavior for testing. It only exposes APIs that allow users to test in a way that is close to user usage. Its guiding principles are:

The more your tests resemble the way your software is used, the more confidence they can give you.

This is also our guiding principle for testing components: tests should not rely too much on the internal implementation of the object under test, but should think about its inputs and outputs from a user's perspective. In most cases, a component's inputs are user interaction, props, and other data coming from outside (such as the store or API calls); its outputs are the view, events, and other API calls (such as calls to router or store methods).

Focusing only on a component's inputs and outputs lets us write maintainable test code and gives us the confidence to refactor; as we iterate, tests fail at the right time rather than erroring out simply because a class name changed.

Vue Testing Library uses the Queries API to query DOM nodes inside a component. The Queries API comes from DOM Testing Library; let's briefly introduce it.

(Although we use Vue Testing Library for testing, I still recommend reading the Vue Test Utils documentation, because the former is built on top of Vue Test Utils, and the configuration fields for rendering components and the methods for updating them overlap; in addition, its documentation systematically covers how to test a Vue component, including custom events, routing, state management, and so on, and is well worth reading.)

Queries

If only one DOM node is queried, the Queries API can be divided into three types according to the results of the DOM query:

  • getBy**: throws an error when nothing is found or when multiple results are found;
  • queryBy**: returns null when nothing is found, and throws an error when multiple results are found;
  • findBy**: queries the DOM asynchronously and returns a Promise; it rejects when nothing is found or when multiple results are found. This is useful when querying DOM that only appears after the view has updated.

If you want to query multiple DOM nodes:

  • getAllBy**: returns an array of results; otherwise behaves like getBy**;
  • queryAllBy**: returns an empty array when nothing is found, and an array of results otherwise;
  • findAllBy**: returns a Promise that resolves to an array of results; otherwise behaves like findBy**.

By how they query the DOM, the queries fall into 8 kinds. You can see the details here, so I won't list them all. The documentation also gives a recommended priority order for using these APIs.
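To get a feel for the difference between the three single-result variants, here is a small sketch inside an async test; the component and the texts are hypothetical:

const { getByText, queryByText, findByText } = render(Component)

// getBy*: throws immediately if 'Hello World!' is not in the DOM
getByText('Hello World!')

// queryBy*: returns null instead of throwing, handy for asserting absence
expect(queryByText('Does not exist')).toBeNull()

// findBy*: returns a Promise and retries until the text appears (or times out)
await findByText('Loaded')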

DOM Testing Library essentially calls various DOM APIs (such as querySelector()) on a given DOM element and returns the query results. The usage looks roughly like this:

const input = getByLabelText(container, 'Username') 

As you can see, you need to pass in a root node when using it, and DOM Testing Library will query its child elements.

Since the root node of a Vue component is generally fixed, the Vue Testing Library modifies the implementation of the Queries API, omitting the input of the root node:

const { getByText } = render(Component)

getByText('Hello World!') 
render

The render method is used to mount Vue components, which is equivalent to the mount method of Vue Test Utils, but slightly different. The interface is as follows:

function render(Component, options, callbackFunction) {
  return {
    ...DOMTestingLibraryQueries,
    container,
    baseElement,
    debug(element),
    unmount,
    html,
    emitted,
    rerender(props),
  }
}

The usage is similar to the mount method, but returns the Queries API and several variables and methods, see here for details .

The internal implementation of the render method is also very simple, roughly modifying the node on which the component is mounted and then calling the mount method.
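A rough sketch of that idea (not the actual source; the names are simplified):

import { mount } from '@vue/test-utils'
import { getQueriesForElement } from '@testing-library/dom'

function simplifiedRender(Component: any, options: Record<string, unknown> = {}) {
  const container = document.createElement('div')
  document.body.appendChild(container)
  // mount the component onto our own container via Vue Test Utils
  const wrapper = mount(Component, { ...options, attachTo: container })
  // bind all DOM Testing Library queries to the container,
  // so callers don't have to pass a root element themselves
  return {
    container,
    ...getQueriesForElement(container),
    unmount: () => wrapper.unmount(),
  }
}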

fireEvent

The fireEvent method, as its name implies, is used to trigger events for DOM nodes, and is used as follows:

await fireEvent.click(getByText('Click me')) 

Like the trigger method of Vue Test Utils, in order to ensure that the DOM is updated, calling it will return a Promise, which we need to await.

Under the hood, fireEvent calls dispatchEvent on the element you pass in to fire the event, and then calls Vue Test Utils' flushPromises() to wait for the DOM to update.
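A minimal sketch of that idea (simplified, not the actual source):

import { flushPromises } from '@vue/test-utils'

async function simplifiedFireEvent(element: Element, event: Event) {
  // fire the native event on the element
  element.dispatchEvent(event)
  // then wait for Vue to flush pending updates before returning
  return flushPromises()
}

// usage: await simplifiedFireEvent(button, new MouseEvent('click', { bubbles: true }))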

cleanup

The cleanup method unmounts all mounted components. Vue Testing Library internally maintains a list of mounted components; whenever the render function is called, the rendered component is added to the list. When cleanup is called, the unmount method of Vue Test Utils is called for each component in the list to unmount it.
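Conceptually it works something like this (a sketch, not the actual source):

import type { VueWrapper } from '@vue/test-utils'

const mountedWrappers = new Set<VueWrapper<any>>()

// render() registers every wrapper it creates
function track(wrapper: VueWrapper<any>) {
  mountedWrappers.add(wrapper)
}

// cleanup() unmounts everything that was rendered
function simplifiedCleanup() {
  mountedWrappers.forEach(wrapper => wrapper.unmount())
  mountedWrappers.clear()
}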

By default, Vue Testing Library will call the cleanup function in the afterEach hook, so we don't have to call it manually. But there is another problem that needs to be paid attention to, which we will talk about later.


That covers the basic use of Vue Testing Library. It does not have many APIs and is very easy to pick up. Its source code is also small, fewer than 200 lines; interested readers can read through it.

Inlining the component library

If the component we are testing depends on components provided by a component library, an error may appear under Vitest:

The error message shows that Vitest cannot handle the CSS file of one of vant's components. This happens because Vitest does not transform modules inside node_modules; it hands them directly to Node for execution, and Node of course does not understand CSS files. The rationale is that packages in node_modules are generally already in ESM or CJS format that Node can execute, so for performance reasons they are not processed and Vitest does not include them in the module graph.

So the fix is clear: have Vitest transform vant. The deps.inline option achieves this:

// vite.config.ts
test: {
  deps: {
    inline: ['vant'],
  },
},

Other configuration

The directory structure of the test can be directly copied from the src directory, which is convenient for maintenance and later iterations.

If you want to use vi.useFakeTimers(), remember to configure it like this:

vi.useFakeTimers({
  toFake: ['setTimeout', 'clearTimeout'],
})

The specific reason can be found in this issue . In the example project I have put the above code in the setup file.

If you use Vite, you also need to add a configuration to the configuration file:

resolve: {
  conditions: process.env.VITEST ? ['node'] : [],
},

See this issue for details .

Finally, set test.globals to true. Why? To stay compatible with the Jest ecosystem. Most testing libraries today are written for Jest, which means they assume APIs such as expect and afterEach are available globally.

For example, Vue Testing Library will call the cleanup function in the afterEach hook to uninstall all Vue components:

if (typeof afterEach === 'function' && !process.env.VTL_SKIP_AUTO_CLEANUP) {
  afterEach(() => {
    cleanup()
  })
}

If globals is not enabled, we need to manually call cleanup.

Similarly, without globals the jest-dom setup above has to import all the matchers and extend expect manually, and the same applies to Pinia's createTestingPinia helper introduced later. So, to avoid unexpected problems during testing, it is recommended to enable globals.

Testing LoginForm

For the first hands-on example, let's test LoginForm, the login form component. Its functionality is very simple: on submit it calls the login API; after a successful login it stores the token and calls the router to navigate to a new page. It also has small features for form validation and disabling the submit button.

So the functions we want to test and the corresponding use cases are as follows:

  • After filling out the form and logging in successfully, the token is stored in local storage. The input is the user filling out the form; the output is the token field in localStorage. Since jsdom supports local storage, we can call localStorage directly; if it weren't supported, mocking would be required.
  • After filling out the form and logging in successfully, the router is called 1 second later to navigate to a new page. The input is the user filling out the form; the output is the call to router.replace() and the arguments passed to it. So we need to mock the vue-router module in order to assert on the call to the method returned by useRouter(). You may ask: why not just assert on the URL of the page after navigation? Because we only mount the component under test, navigating to a new page would require the RouterView component, which in turn means mounting an App component to host the RouterView, configuring a route table, creating a router instance and installing it on the app. That is quite a lot of work; if you don't find it troublesome you can do it that way, but personally I think it is pointless, since it is still essentially a simulated router, so directly mocking the vue-router module is enough.
  • The submit button is disabled while the form is being submitted, and re-enabled when submission fails. The input is submitting the form; the output is the state of the button.
  • Form validation: if an input box loses focus while empty, or the form is submitted with unfilled fields, a prompt message is shown. The input is blurring an input box or submitting the form; the output is the prompt message being displayed.

Designing test cases around the inputs and outputs of a component's features, as above, is the recommended practice.

Logging in sends a request, so the backend API being called needs to be mocked. From the earlier sections you should already know how to mock functions, like this:

import * as loginAPI from '~/api/userManagement'

vi.spyOn(loginAPI, 'login').mockImplementation(vi.fn().mockResolvedValue({
  token: 'acbdefgfedbca123',
}))

To mock a successful result you can use the mockResolvedValue method as above, which makes the mock return a resolved Promise. To mock a failure, use the mockRejectedValue method:

vi.mocked(loginAPI.login).mockImplementation(vi.fn().mockRejectedValue('rejected')) 

Now we can write our first test case:

describe('填写表单进行登录', () => {
  test('输入用户名和密码进行登录可以登录成功, 将 token 存储到本地存储中', async () => {
    // mock the backend API
    vi.spyOn(loginAPI, 'login').mockImplementation(vi.fn().mockResolvedValue({
      token: 'acbdefgfedbca123',
    }))
    const { getByPlaceholderText, getByTestId } = render(LoginForm)
    expect(localStorage.getItem('token')).toBeNull()
    await fireEvent.update(getByPlaceholderText('用户名'), 'jeanmay')
    await fireEvent.update(getByPlaceholderText('密码'), 'password123456')
    await fireEvent.submit(getByTestId('form'))
    expect(localStorage.getItem('token')).toBe('acbdefgfedbca123')
    // clean up local storage
    localStorage.removeItem('token')
    vi.clearAllMocks()
  })
})

Note that I divided the code in the test into four steps:

  • The first step is setup: mocking the API, rendering the component, and "controlling the variables" before the test;
  • The second step is exercising the code: triggering the inputs we defined earlier. Here we fill in the form and submit it. Remember to await every fireEvent call to ensure the view has been updated;
  • The third step is asserting: checking whether the output matches our expectations. Here we assert that the mocked token is present in local storage;
  • The last step is teardown, where state and side effects are cleaned up. Here we remove the token from local storage and clear the mocked API's call records. In addition, Vue Testing Library automatically unmounts the component for us.

These four steps are very important. Organizing the test code in this way can clearly express the intent of the test and ensure the independence and maintainability of the test.

Repetitive setup and teardown work can be extracted into hook functions or hoisted to a higher-level scope. After extraction, the final code looks like this:

describe('LoginForm', () => {
  afterEach(() => {
    vi.clearAllMocks()
  })

  describe('填写表单进行登录', () => {
    vi.spyOn(loginAPI, 'login').mockImplementation(vi.fn().mockResolvedValue({
      token: 'acbdefgfedbca123',
    }))

    afterEach(() => {
      localStorage.removeItem('token')
    })

    test('输入用户名和密码进行登录可以登录成功, 将 token 存储到本地存储中', async () => {
      const { getByPlaceholderText, getByTestId } = render(LoginForm)
      expect(localStorage.getItem('token')).toBeNull()
      await fireEvent.update(getByPlaceholderText('用户名'), 'jeanmay')
      await fireEvent.update(getByPlaceholderText('密码'), 'password123456')
      await fireEvent.submit(getByTestId('form'))
      // await waitFor(() => expect(localStorage.getItem('token')).toBe('acbdefgfedbca123'))
      expect(localStorage.getItem('token')).toBe('acbdefgfedbca123')
    })
  })
})

Next, let's write the code for the second use case. Since the router is used, we need to mock the vue-router module. The mock code is as follows:

import type * as VueRouter from 'vue-router'

const replace = vi.fn()
vi.mock('vue-router', async () => ({
  ...await vi.importActual<typeof VueRouter>('vue-router'),
  useRouter: () => ({
    replace,
  }),
}))

Since the source code uses router.replace(), mocking useRouter and replace is enough here.

I will post the test code directly:

test('输入用户名和密码进行登录可以登录成功, 1 秒后调用 router.replace()', async () => {
  const { getByPlaceholderText, getByTestId } = render(LoginForm)
  expect(replace).not.toHaveBeenCalled()
  await fireEvent.update(getByPlaceholderText('用户名'), 'jeanmay')
  await fireEvent.update(getByPlaceholderText('密码'), 'password123456')
  await fireEvent.submit(getByTestId('form'))
  vi.advanceTimersByTime(1000)
  expect(replace).toHaveBeenCalledTimes(1)
  expect(replace).toHaveBeenCalledWith('/address/shipAddress')
})

Since the source code uses a timer, we also need vi.useFakeTimers(); this is already done in the setup file, so there is no need to do it again.

The remaining tests are relatively simple, so there is no need to walk through them. That concludes the LoginForm tests. In this section I covered how to design test cases from the perspective of inputs and outputs based on the component's features, the four steps for organizing test code, and how to mock commonly used modules.

There are also several common techniques, such as using the toBeInTheDocument() matcher to check whether a DOM node exists, and toBeEnabled() / toBeDisabled() to check whether a button is enabled or disabled.

Testing AddressListItem

The AddressListItem component receives address information through props, renders it to the view, navigates to a new page when clicked, and emits a longTouch event when long-pressed for one second.

Based on the method provided in the previous section, it should be easy to figure out how to design test cases, so I won't introduce it here. Here we will elaborate on the function of clicking to jump to a new page, because this process involves calling the store.

The state management used in this project is Pinia. Pinia provides the createTestingPinia method to simplify the complexity of testing. The usage is as follows:

render(Component, {
  global: {
    plugins: [createTestingPinia()],
  },
})

Calling createTestingPinia returns a Pinia instance dedicated to testing. After adding it to global.plugins, all access to the store returns a mocked store instead of the originally defined one, so we don't have to worry that calling an action or modifying state will affect other tests or the store in the source code. This mocked store behaves like the original; the only difference is that Pinia replaces all actions with mock functions (vi.fn(), for example), so we can spy on these actions directly without worrying that they will send network requests or modify state.

(Note: createTestingPinia assumes vi.fn() or jest.fn() is available globally, which is why globals needs to be enabled.)
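As a minimal sketch of what this enables (Counter and useCounterStore here are hypothetical, just to show the pattern; the real components are tested below):

test('clicking +1 calls the increment action', async () => {
  const { getByText } = render(Counter, {
    global: {
      plugins: [createTestingPinia()],
    },
  })
  const counter = useCounterStore()

  await fireEvent.click(getByText('+1'))
  // actions are replaced with vi.fn() stubs, so we can assert on them directly
  expect(counter.increment).toHaveBeenCalledTimes(1)
})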

The source code that replaces the actions looks like this:

const createSpy = _createSpy || typeof jest !== "undefined" && jest.fn || typeof vi !== "undefined" && vi.fn;
if (!createSpy) {
  throw new Error("[@pinia/testing]: You must configure the `createSpy` option.");
}
pinia$1._p.push(({ store, options }) => {
  Object.keys(options.actions).forEach((action) => {
    store[action] = stubActions ? createSpy() : createSpy(store[action]);
  });
  store.$patch = stubPatch ? createSpy() : createSpy(store.$patch);
});

stubActions is an option passed into createTestingPinia. As you can see, if stubActions is false, the original implementation is kept and simply wrapped in a spy.

In addition to passing in the stubActions option, we can also set the initial value of the state of the store:

render(Component, {
  global: {
    plugins: [
      createTestingPinia({
        initialState: {
          counter: { n: 20 },
        },
      }),
    ],
  },
})

If we need to change the value of the getter, we can also force it to be written:

const counter = useCounter()

// @ts-expect-error: usually it's a number
counter.double = 2 

But you need the @ts-expect-error annotation to bypass the TypeScript compiler check.

Next, let's test the use case of "setting the currentAddressId of the store after clicking". The code is as follows:

const renderAddressListItem = () => {
  return render(AddressListItem, {
    props: {
      addressInfo,
    },
    global: {
      plugins: [createTestingPinia()],
    },
  })
}

describe('AddressListItem', () => {
  afterEach(() => {
    vi.clearAllMocks()
  })

  test('点击后设置 store 的 currentAddressId', async () => {
    const { getByTestId } = renderAddressListItem()
    const address = useAddressStore()
    expect(address.currentAddressId).toBe('')
    await fireEvent.click(getByTestId('item'))
    expect(address.currentAddressId).toBe(addressInfo.addressId)
  })
})

When render is called repeatedly with many of the same options, the call can be extracted into a function. Here that is renderAddressListItem, which supplies the address information to display and applies createTestingPinia.

The test code is relatively simple, there is nothing to talk about, and the way of using and asserting the store is similar to testing the router. The main thing is to learn how to use the createTestingPinia method.

Testing AddressList

The AddressList component calls a store action to fetch the address list data and passes it to AddressListItem. After the address list is fetched, and whenever its length changes, it emits a fetch event. It also listens for AddressListItem's longTouch event and calls an action in the event callback to delete the corresponding address list item.

Let's take a look at the code of the test "get and display address list information":

test('获取并展示地址列表信息', async () => {
  const { findAllByTestId } = renderAddressList()
  expect(await findAllByTestId('item')).toHaveLength(3)
})

Since the action is called in the source code to initiate a request to obtain the address list, this is an asynchronous process, so you need to use findAllByTestId().

The function we encapsulate for rendering the component is as follows:

const renderAddressList = (stubs = false) => {
  const spy = () => {
    return vi.fn(async () => {
      const address = useAddressStore()
      address.addressInfoList.push(...mockedAddressInfoList)
    })
  }
  if (stubs) {
    const AddressListItem = defineComponent({
      emits: ['longTouch'],
      setup(props, { emit }) {
        const emitLongTouch = async () => {
          emit('longTouch')
        }
        emitLongTouch()
      },
      template: '<div />',
    })
    return render(AddressList, {
      global: {
        stubs: {
          AddressListItem,
        },
        plugins: [
          createTestingPinia({
            createSpy: spy,
          }),
        ],
      },
    })
  }
  else {
    return render(AddressList, {
      global: {
        plugins: [
          createTestingPinia({
            createSpy: spy,
          }),
        ],
      },
    })
  }
}

Because later test cases will cover the logic of receiving AddressListItem's longTouch event and deleting the list item, the AddressListItem component needs to be stubbed. So the render helper distinguishes between the stubbed and non-stubbed cases, controlled by the stubs parameter, which defaults to false.

We also define a spy function and pass it in through the createSpy option, because the component calls the store's getAddressInfoList() action to fetch the address list as soon as it is created, which means this action must already be mocked before we render the component; supplying our own createSpy function achieves that. In the spy function we replace every action with an implementation that simply updates address.addressInfoList. Because the test scenario is simple, this causes no real problems; when different actions need different implementations before the component is created, the spy function could instead be passed in as a parameter.

If the component does not call the action immediately around creation, we don't need to override createSpy; we can just modify the mock after mounting, as in this test case:

test('监听到 Item 组件的 longTouch 事件后弹出弹窗,点击确定即可删除该 Item', async () => {
  mockedAddressInfoList.splice(0, 2)
  const { findAllByTestId, queryAllByTestId } = renderAddressList(true)
  const address = useAddressStore()
  vi.mocked(address.deleteAddress).mockImplementation(vi.fn(async () => {
    address.addressInfoList = []
  }))
  expect(await findAllByTestId('item')).toHaveLength(1)
  await fireEvent.click(screen.getByText('确认'))
  expect(address.deleteAddress).toHaveBeenCalledWith('3')
  expect(queryAllByTestId('item')).toHaveLength(0)
})

Here the implementation of address.deleteAddress is mocked after the component has been mounted.

Testing AddressForm

There are two things to note when testing AddressForm.

The first is setting an initial getter value. Although createTestingPinia only supports initializing state, initializing a getter is not difficult: since a getter is computed from state, setting the initial state is enough.
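For example, a sketch (the store id 'address' and the exact fields are assumptions based on the store used earlier): seeding the state the getter is computed from is enough.

render(AddressForm, {
  global: {
    plugins: [
      createTestingPinia({
        initialState: {
          // the getter under test is derived from these state fields
          address: {
            currentAddressId: '3',
            addressInfoList: mockedAddressInfoList,
          },
        },
      }),
    ],
  },
})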

The second is to override the implementation of the module's mock function inside a test case, such as this test:

test('正确填写表单并提交成功后,1 秒后调用 router.back()', async () => {
  const back = vi.fn()
  vi.mocked(useRouter, {
    partial: true,
  }).mockImplementation(() => ({
    back,
  }))
  const { getByPlaceholderText, getByText, getByRole, getByTestId } = renderAddressForm()
  expect(back).not.toHaveBeenCalled()
  await fireEvent.update(getByPlaceholderText('请填写收货人姓名'), addressInfo.name)
  await fireEvent.update(getByPlaceholderText('手机号码'), addressInfo.mobilePhone)
  await fireEvent.click(getByPlaceholderText('点击选择省市区'))
  await fireEvent.click(screen.getByText('确认'))
  await fireEvent.update(getByPlaceholderText('详细地址'), addressInfo.detailAddress)
  await fireEvent.click(getByText('家'))
  await fireEvent.click(getByRole('switch'))
  await fireEvent.submit(getByTestId('form'))
  vi.advanceTimersByTime(1000)
  expect(back).toHaveBeenCalledTimes(1)
})

Note that the vi.mocked() call needs an extra option object with partial set to true, indicating that only part of the module's API is mocked.

Test Pinia stores

In addition to components, we also need to test stores. A store usually manages the state of one or more business modules and is responsible for orchestrating and maintaining the data layer at the module level; it is an important part of a web application, so testing stores is an important part of automated testing.

Testing a Pinia store is very simple: it is essentially unit testing each getter and action, at a much finer granularity than component tests. The only thing to remember is to add this piece of code:

beforeEach(() => {
  setActivePinia(createPinia())
})

Using a store requires an active Pinia instance; otherwise you would have to pass one manually into useAddressStore(). The code above takes care of this automatically.
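As a minimal sketch of such a store test (the import path and the exact store shape are assumptions based on the examples above):

import { createPinia, setActivePinia } from 'pinia'
import { beforeEach, describe, expect, test } from 'vitest'
import { useAddressStore } from '~/store/address' // path is an assumption

describe('address store', () => {
  beforeEach(() => {
    setActivePinia(createPinia())
  })

  test('currentAddressId starts empty and can be updated', () => {
    const address = useAddressStore()
    expect(address.currentAddressId).toBe('')
    address.currentAddressId = '3'
    expect(address.currentAddressId).toBe('3')
  })
})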

After completing the above, the rest is much simpler and needs no further introduction; you can look at the repository code directly.


That concludes the hands-on part on front-end automated testing and component testing. I have focused on the principles, techniques and caveats of component testing. Once you understand them and get some practice, you will find that writing tests is not actually difficult: work from the inputs and outputs of the component's features, organize the test code in four steps, and the rest is just familiarity with the various APIs.
