Some suggestions to help junior test engineers avoid common pitfalls

Most developers I meet are not very keen on testing. Some do test, but most don't, are unwilling to, or are at best reluctant. I love testing and happily spend more time testing than writing new code, and I believe that focus on testing is exactly why I can spend less time writing new code or fixing bugs and still be very productive. If you are unsure whether to write tests, or don't write them often, the points below should steer you in a better direction.

1. Not testing at all
This is the easiest trap to fall into, and there is rarely a good reason for it. Starting now, make a plan to add tests to the code you are currently working on, and to include tests in your future projects.

2. Not starting tests at the beginning of the project
Going back and adding tests later is difficult, and you may even need to change the architecture to make the code testable, which ultimately takes longer to produce reliable code. Writing tests throughout the project's lifecycle, starting on day one, saves time and effort.

3. Believing the first test must fail
The popularity of TDD brought the red-green-refactor concept into the software testing world. This philosophy is often misunderstood as saying you must "start by writing a test that fails". Not so. The purpose of writing tests before code is to define what the correct behavior of the system should be. In many cases that first test does fail (shown in red), but it may equally show up as an inconclusive or not-yet-implemented test.
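
As an illustration (not from the original article), here is a minimal sketch in Python with pytest: the test is written first to pin down the desired behaviour, and an expected-failure marker records its status explicitly instead of leaving the suite simply "red". The function name and numbers are assumptions.

```python
import pytest


def apply_discount(total: float, percent: float) -> float:
    """Stub for behaviour that has not been written yet (hypothetical name)."""
    raise NotImplementedError


# The test documents the expected behaviour before the code exists.
# Marking it xfail records the missing feature without failing the build;
# remove the marker once apply_discount is actually implemented.
@pytest.mark.xfail(raises=NotImplementedError, reason="apply_discount not implemented yet")
def test_apply_discount_reduces_total_by_percentage():
    assert apply_discount(total=200.0, percent=10) == 180.0
```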

4. Worrying about unimplemented tests
A big problem in software development is the gap between the code and any documentation of what the system is actually supposed to do. A test whose name clearly states the expected behavior you ultimately want gives you some value even before you know how the test itself will be written.
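
For example (an illustrative sketch, not from the article), a well-named but skipped pytest test already documents the intended behaviour, even before anyone knows how its body will be written:

```python
import pytest


# The name alone records what the system is supposed to do; the body can be
# filled in later. The scenario and name are illustrative assumptions.
@pytest.mark.skip(reason="test not implemented yet")
def test_order_with_expired_coupon_is_rejected_with_a_clear_error_message():
    ...
```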

5. Not naming tests well
Naming things in software is notoriously difficult, and tests are no exception. There are several popular conventions for naming tests. It doesn't matter which one you use, as long as you apply it consistently and it describes exactly what you are testing.
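
Two common conventions, shown here as illustrative Python sketches (the scenarios are assumptions; what matters is picking one style and sticking to it):

```python
# Convention A: <unit>_<scenario>_<expected result>
def test_withdraw_amount_larger_than_balance_raises_value_error():
    ...


# Convention B: given / when / then phrasing
def test_given_empty_cart_when_checked_out_then_total_is_zero():
    ...
```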

6. Letting one test do too many things
Long and complicated test names usually indicate that you are trying to test several things at once. A single test should test one thing, and when it fails, the failure itself should tell you what is wrong in the code; you should not have to inspect which part of the test failed to work that out. This doesn't mean a test can't contain multiple assertions, but those assertions should be closely related. For example, a test that looks at the output of an order-processing system and confirms both that the output contains a single order line and that the line holds the expected items is fine. A test that verifies, against the same system, that a specific item was created, that it was logged to a database, and that a confirmation email was sent is not.
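
A small sketch of the acceptable case (the order-processing step is a toy stand-in, and all names and data are illustrative):

```python
def process_order(skus):
    """Toy stand-in for an order-processing system."""
    return {"lines": [{"sku": sku, "qty": 1} for sku in skus]}


def test_processed_order_contains_a_single_line_with_the_expected_item():
    order = process_order(["ABC-1"])
    # Two assertions, but both describe one behaviour: the shape of the output.
    assert len(order["lines"]) == 1
    assert order["lines"][0]["sku"] == "ABC-1"

# A test that additionally checked database logging and email sending in the
# same body would be covering three behaviours and should be split into three.
```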

7. Not actually testing the code
It's not uncommon to see novice testers build overly complex mocks and setup code that never really exercise the code under test. Such tests may only verify that the mock behaves as configured, check that the mock does the same thing as the real code, or simply execute the code without making any assertions. These "tests" are a waste of effort, especially when they exist only to push up the code-coverage number.
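
A sketch of the difference using Python's unittest.mock (the mailer and function names are illustrative assumptions):

```python
from unittest.mock import Mock


def send_welcome_email(mailer, user_email):
    """Code under test (illustrative)."""
    mailer.send(to=user_email, subject="Welcome!")


# Anti-pattern: this only proves the mock returns what it was configured to return.
def test_mailer_send_returns_true():
    mailer = Mock()
    mailer.send.return_value = True
    assert mailer.send(to="a@example.com", subject="Welcome!") is True


# Better: exercise the real function and assert on the interaction it performs.
def test_send_welcome_email_sends_to_the_given_address():
    mailer = Mock()
    send_welcome_email(mailer, "a@example.com")
    mailer.send.assert_called_once_with(to="a@example.com", subject="Welcome!")
```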

8. Worrying about code coverage
Code coverage is an appealing idea but often of limited practical value. Knowing how much of the code is executed when the tests run sounds useful, but because the number says nothing about the quality of the tests doing the executing, on its own it means little. Coverage is most telling at the extremes: very high coverage may mean more code is being tested than is bringing value, while very low coverage indicates the code is probably under-tested. Because of this ambiguity, people are often unsure whether a given piece of code needs tests at all. I settle it with a simple question: does the code contain significant complexity? If so, it needs some tests; if not, it doesn't. Testing property accessors is nothing but a waste of time: if they fail, something far more fundamental is wrong with your system than the code you are writing. If you can tell exactly what a piece of code does at a glance, it doesn't need a test, and this applies both to code you read and to code you write. If you will need to come back to a piece of code at some point, it needs tests; and if a bug is found in existing code, that means the code was not adequately tested for its complexity.
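
A quick illustration of that question (the class and discount rule are hypothetical): the trivial property accessor is not worth a test, while the branching calculation is.

```python
class Invoice:
    def __init__(self, subtotal: float, tax_rate: float) -> None:
        self._subtotal = subtotal
        self._tax_rate = tax_rate

    # Trivial property accessor: obvious at a glance, not worth its own test.
    @property
    def subtotal(self) -> float:
        return self._subtotal

    # Branching calculation with real complexity: this is worth testing.
    def total(self, discount_code: str = "") -> float:
        total = self._subtotal * (1 + self._tax_rate)
        if discount_code == "SAVE10":  # hypothetical business rule
            total *= 0.9
        return round(total, 2)


def test_total_applies_the_save10_discount_after_tax():
    invoice = Invoice(subtotal=100.0, tax_rate=0.2)
    assert invoice.total("SAVE10") == 108.0
```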

9. Focusing on one style of testing
Once you start testing, it is easy to get stuck on a single style of test. This is a mistake: no single type of test can adequately cover a whole system. You need unit tests to confirm that the individual components of the code work correctly. You need integration tests to confirm that the different components work together. You need automated UI tests to verify that the software behaves as the user expects. Finally, you need manual and exploratory testing for anything that cannot easily be automated.
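
One way (among many) to keep more than one style of test in the same suite is to separate them with pytest markers. The sketch below assumes a custom `integration` marker registered in pytest.ini; the tests themselves are toy examples.

```python
import pytest


def test_parse_quantity_strips_whitespace():
    # Unit test: pure logic, no external systems involved.
    assert int(" 3 ".strip()) == 3


@pytest.mark.integration
def test_orders_survive_a_round_trip_through_storage(tmp_path):
    # Integration-style test: exercises real I/O (a temporary file stands in
    # for a database here).
    db_file = tmp_path / "orders.txt"
    db_file.write_text("order-1\n")
    assert db_file.read_text() == "order-1\n"
```

Running `pytest -m "not integration"` then gives a fast unit-only run, while the plain `pytest` command still exercises both levels.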

10. Focusing only on short-term testing
Most of the value of testing is gained over time. Tests shouldn't exist just to confirm that something was written correctly in the first place; they should keep working over time as the codebase continues to change. When regression bugs or new exceptions appear, tests that are run repeatedly catch the problems early, which means bugs and exceptions can be fixed faster, cheaper, and more easily. The fact that coded tests can be executed automatically, quickly, and without the variation of human error is exactly what makes them so valuable.
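
For instance (a hypothetical scenario, not from the article), once a bug has been fixed, a regression test named after the failure keeps paying off on every future run:

```python
from typing import Optional


def normalize_username(name: Optional[str]) -> str:
    # Hypothetical fix: earlier versions crashed when name was None.
    return name.strip().lower() if name else ""


def test_normalize_username_treats_none_as_empty_regression():
    # Runs on every build, so the old crash cannot quietly come back.
    assert normalize_username(None) == ""
```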

11. Relying, as a developer, on others to run (or write) your tests
Tests are of little value if they are not run, and if they cannot be run, bugs will be missed. Running tests automatically as part of a continuous-integration system is a start, but anyone on the project should be able to run the tests at any time. If special settings, machines, permissions, or configuration are required, those requirements become barriers to the tests ever being executed. Developers need to be able to run the tests before checking in code, so they need access and permission to run every relevant test. Code and tests should live in the same place, and any setup required should be scripted. The worst example I've seen was a badly run project where a sub-team of testers periodically took copies of the code the developers were working on, modified it so they could execute a series of tests on specially configured (and undocumented) machines the developers could not access, and then sent a long email to all developers listing the problems they had found. Not only is that a bad way to test, it is a bad way to work as a team. Do not do this. Producing code that executes correctly is part of being a professional developer, and the way to ensure that correctness is to ship appropriate tests alongside the code. Relying on other people to write and run the tests for code you wrote will not help you become a professional developer.
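
"Any setup required should be scripted" can be as small as a one-file entry point checked in next to the code. This sketch assumes a requirements-dev.txt file and a tests/ directory, both of which are illustrative, not from the article.

```python
#!/usr/bin/env python3
"""run_tests.py: one command that any developer can run at any time."""
import subprocess
import sys


def main() -> int:
    steps = [
        # Install test dependencies from a checked-in file...
        [sys.executable, "-m", "pip", "install", "-r", "requirements-dev.txt"],
        # ...then run the whole suite that lives alongside the code.
        [sys.executable, "-m", "pytest", "tests/"],
    ]
    for step in steps:
        code = subprocess.run(step).returncode
        if code != 0:
            return code
    return 0


if __name__ == "__main__":
    sys.exit(main())
```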

If none of the above applies to you, then congratulations! Continue to develop robust and valuable software.

If any of the above does apply to you, then it's time to make some changes.
