Let's talk about testing.

1) Self-contained data.
   Each test case is responsible for its own test data.
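A minimal sketch of this principle, using Python's `unittest` (the class and data below are hypothetical examples, not from the original): the case builds everything it needs in its own `setUp`, so it depends on no shared fixture file and no state left behind by other tests.

```python
import unittest

class TestOrderTotal(unittest.TestCase):
    """Hypothetical case: the test creates the data it needs itself."""

    def setUp(self):
        # The test owns its data: nothing is read from a shared fixture
        # or left over from a previously executed test.
        self.items = [("apple", 2, 150), ("bread", 1, 300)]  # (name, qty, cents)

    def test_total(self):
        total = sum(qty * price for _, qty, price in self.items)
        self.assertEqual(total, 600)

# Run the case programmatically so the sketch is self-contained.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestOrderTotal)
)
print(result.wasSuccessful())
```

Because the data lives inside the case, the test can be run alone, repeated, or reordered without breaking.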

2) The test environment and the object under test are independent.
    Assume the desired test environment exists independently of the tests; it can be created manually or automatically before testing, and the relevant environment settings are passed to the test code at run time.
    The object under test should also be independent. If test data needs to be deployed into the test object, inject it after the object is created, rather than constructing the test data inside the object at construction time — that creates tighter coupling.
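The injection point above can be sketched like this (the `Cache` class is a hypothetical stand-in for the object under test): the object is constructed empty, and the test injects its data afterwards, so the object carries no built-in dependency on any particular dataset.

```python
class Cache:
    """Hypothetical object under test: it does not build its own test data."""

    def __init__(self):
        # Constructed empty — no dataset is baked into construction.
        self._store = {}

    def load(self, entries):
        # Test data is injected after construction, keeping the object
        # decoupled from whichever dataset a given test uses.
        self._store.update(entries)

    def get(self, key):
        return self._store.get(key)

# In a test: create the object first, then inject the data.
cache = Cache()
cache.load({"k1": "v1"})
print(cache.get("k1"))
```

The same object can now be reused across tests with different injected datasets, instead of being rebuilt with data hard-wired into its constructor.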

3) Self-explanatory.
   The purpose of each test should be stated clearly in the test code, together with a brief case description. That way a tool (such as javadoc) can generate a short case document matching the test requirement. Each
   test is also written with a (hierarchical) tag that indicates the classification of the case.
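One way to sketch this in Python (the `tag` decorator and the test function are hypothetical illustrations, not an existing library API): the description lives in the docstring and the hierarchical classification in a tag attribute, both of which a documentation generator could harvest.

```python
def tag(*labels):
    """Hypothetical decorator attaching hierarchical tags to a test."""
    def wrap(fn):
        fn.tags = labels
        return fn
    return wrap

@tag("payment", "payment/refund")
def test_refund_restores_balance():
    """Refunding a captured payment restores the customer's balance."""
    balance, refund = 90, 10
    assert balance + refund == 100

# A doc generator can harvest the description and the classification:
print(test_refund_restores_balance.__doc__)
print(test_refund_restores_balance.tags)
```

In a Java codebase the same idea maps naturally onto javadoc comments plus JUnit 5's `@Tag`/`@DisplayName` annotations.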

4) Traceable test results.
   The test framework should make it easy to recover the test case description (produced by the self-explanatory part) from exceptions and failure markers.
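A minimal sketch of that traceability (the runner and case below are hypothetical): when a case fails, the runner reports the case's own description rather than only a raw traceback, so the result traces straight back to the requirement.

```python
def test_discount_applies():
    """Case: a 10-unit discount reduces the price from 100 to 90."""
    price, discount = 100, 10
    assert price - discount == 90

def run(case):
    # On failure, pair the outcome with the case description (the docstring),
    # so the report points back at the requirement, not just at a stack trace.
    try:
        case()
        return ("PASS", case.__doc__)
    except AssertionError:
        return ("FAIL", case.__doc__)

status, desc = run(test_discount_applies)
print(status, desc)
```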

5) Well-formatted reports.
   Not only aesthetically pleasing, but also logically clear — easy to locate problems from.
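One simple way to make a report "logically beautiful" is to group results by the hierarchical tags from point 3 (the result tuples below are invented sample data): a failure then shows up under its classification, which narrows down where to look.

```python
from itertools import groupby

# Invented sample results: (tag, case name, outcome), already grouped by tag.
results = [
    ("payment/refund", "test_full_refund", "PASS"),
    ("payment/refund", "test_partial_refund", "FAIL"),
    ("cart", "test_add_item", "PASS"),
]

# Group results by tag so a failing case is easy to locate in the report.
report = []
for tag_name, group in groupby(results, key=lambda r: r[0]):
    report.append(tag_name)
    for _, name, outcome in group:
        report.append(f"  {outcome:4} {name}")

print("\n".join(report))
```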

6) Improve test efficiency. Consider executing unrelated cases in parallel. This is an advanced topic and is generally not needed for small projects.
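A minimal sketch of running unrelated cases in parallel with Python's `concurrent.futures` (the cases here are hypothetical stand-ins): because the cases share no state — the point of principles 1 and 2 — they can safely run concurrently.

```python
import concurrent.futures
import time

def make_case(name, delay):
    def case():
        time.sleep(delay)  # stand-in for real, independent test work
        return name
    return case

cases = [make_case(f"case{i}", 0.05) for i in range(4)]

# Unrelated cases share no state, so they can run concurrently;
# map() still returns results in submission order.
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda c: c(), cases))

print(results)
```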

If you don't mind suffering, all of the above can be ignored...

