Agile technical practice

Compared with the way most programmers have worked over the past 70 years, the practices described in this chapter are fundamentally different. They impose a set of deep, ritualistic, minute-by-minute or even second-by-second behaviors, so most programmers find them absurd when they first encounter them. Many programmers therefore try to do Agile without these practices. They fail, because these practices are the very core of Agile. Agile without test-driven development, refactoring, simple design, and pair programming is nothing but a hollow shell.

5.1 Test-Driven Development

Test-driven development is a rich enough topic to deserve a whole book of its own. This chapter is only an overview, focused on the reasons and motivations for adopting the practice rather than its technical details. In particular, no code will appear in this chapter.

Programming is a unique profession. We produce enormous documents filled with arcane technical symbols. Every symbol in those documents must be correct, or terrible things can happen. A single incorrect symbol can cause loss of property and even loss of life. What other profession is like that?

Accounting. Accountants also produce enormous documents filled with arcane technical symbols, and every symbol in those documents must be correct, or property and even lives may be lost. So how do accountants ensure that every symbol is correct?

5.1.1 Double-entry bookkeeping

Accountants invented a discipline about 1,000 years ago and called it double-entry bookkeeping. Every transaction is entered into the ledger twice: once as a credit against one set of accounts, and again as a corresponding debit against another set. These accounts eventually flow into a single balance sheet document, in which total liabilities and equity are subtracted from total assets. The difference must be zero. If it is not zero, an error has been made.
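Although the analogy here is about accounting rather than software, the balance check can be sketched in a few lines of Python. This is a hypothetical mini-ledger invented for illustration, not anything from the book:

```python
# Each transaction is recorded twice: once as a debit, once as a credit.
ledger = [
    ("cash",  "debit",  500),  # a sale, received in cash
    ("sales", "credit", 500),
    ("stock", "debit",  200),  # inventory bought with cash
    ("cash",  "credit", 200),
]

debits = sum(amount for _account, side, amount in ledger if side == "debit")
credits = sum(amount for _account, side, amount in ledger if side == "credit")

# The balance check: the difference must be zero, or an error has been made.
assert debits - credits == 0
```

Entering each transaction on both sides is what makes a single mistaken symbol detectable: any one-sided entry throws the difference off zero.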

From the very start of their training, accountants are taught to enter transactions one at a time and to check the balance immediately after each entry. That lets them find errors quickly. They are taught to avoid entering a batch of transactions between two balance checks, because that makes errors hard to locate. This discipline is so essential to the proper accounting of money that it has become law virtually everywhere in the world.

Test-driven development is the corresponding discipline for programmers. Every required behavior is entered twice: once as a test, and once as the production code that makes the test pass. The two entries complement each other, just as liabilities complement assets. When the tests are executed together with the production code, the two entries sum to zero: the number of failing tests is zero.

Programmers who learn TDD are taught to add one behavior at a time: first write a failing test, then write just enough production code to make it pass. That lets them find errors quickly. They are taught to avoid writing a lot of production code and only then adding a batch of tests, because that makes errors hard to locate.

The disciplines of double-entry bookkeeping and TDD are equivalent. They serve the same purpose: preventing errors in critically important documents and ensuring that every symbol is correct. Although programming has become essential to society, we have not yet made TDD a legal requirement. But given that poorly written software has already cost lives and property, can legislation be far behind?

5.1.2 The three rules of TDD

TDD can be described by the following three simple rules.

  • Do not write any production code until you have first written a test that fails because that production code does not exist.
  • Do not write more of a test than is sufficient to fail, and failing to compile counts as failing.
  • Do not write more production code than is sufficient to pass the currently failing test.

Programmers with even a little experience are likely to find these rules outrageous, if not downright stupid. They imply a programming cycle of perhaps five seconds. The programmer first writes a bit of test code for production code that does not yet exist. The test almost immediately fails to compile, because it calls elements of that nonexistent production code. The programmer must stop writing the test and start writing production code. But after just a few keystrokes, the test that failed to compile now compiles, forcing the programmer back to the test to continue adding to it.

This cycle of switching between test code and production code every few seconds imposes a powerful constraint on the programmer's behavior. Programmers can no longer write an entire function in one sitting, or even a simple if statement or while loop. They must keep writing test code that "just fails," which interrupts the urge to "write it all at once."
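Although the chapter itself deliberately avoids code, a tiny sketch may make the cycle concrete. The `Stack` example below is a hypothetical illustration, not anything from the book: each test was written first and failed (the class, then the `push` method, did not yet exist), and only then was just enough production code added to pass.

```python
# Production code, grown one failing test at a time under the three rules.
class Stack:
    def __init__(self):
        self._items = []

    # Forced into existence by test_new_stack_is_empty:
    def is_empty(self):
        return not self._items

    # Forced into existence by test_push_makes_stack_non_empty:
    def push(self, item):
        self._items.append(item)


def test_new_stack_is_empty():
    assert Stack().is_empty()

def test_push_makes_stack_non_empty():
    s = Stack()
    s.push(42)
    assert not s.is_empty()

test_new_stack_is_empty()
test_push_makes_stack_non_empty()
```

Each method exists only because a failing test demanded it; at no point was there more production code than the tests required.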

Most programmers initially feel that this disrupts their thinking. The three rules keep interrupting them, preventing them from reasoning through the code they intend to write. They often feel that the three rules are an intolerable nuisance. But now imagine a group of programmers who follow the three rules. Pick one at random, at any moment: less than a minute ago, everything that programmer was working on compiled and passed all of its tests. No matter whom you pick, or when, everything worked a minute ago.

5.1.3 Debugging

What does it mean for everything to have worked a minute ago? How much debugging is left? If everything worked a minute ago, then almost any failure you encounter was introduced within the last minute. Debugging a fault introduced in the last minute is usually trivial; you hardly need a debugger to find it. Are you fluent with your debugger? Do you remember its shortcut keys? Can you set breakpoints, single-step, step in, and step out from muscle memory? Do you feel battered and worn when you debug? Debugging fluency is not a desirable skill.

The only way to become fluent with a debugger is to spend a great deal of time debugging, and spending a great deal of time debugging means there are always plenty of bugs. Test-driven developers tend to be clumsy with debuggers, because they rarely use them, and when they do, the sessions are short.

I don't want to create the wrong impression. Even the best test-driven developers still run into nasty bugs. This is software, after all, and software development is still hard. But by practicing the three rules of TDD, both the incidence and the severity of bugs are greatly reduced.

5.1.4 Documentation

Have you ever integrated a third-party package? It probably arrived as an archive containing source code, DLLs, JAR files, and so on. The archive likely included a PDF of integration instructions, and at the end of that PDF there was probably an ugly appendix containing all the code examples.

In such a document, what do you read first? If you are a programmer, you jump straight to the code examples at the end, because the code tells you the truth.

When you follow the three rules, the tests you write become code examples for the entire system. If you want to know how to call an API function, there are tests that call it every way it can be called and catch every exception it can throw. If you want to know how to create an object, there are tests that create it every way it can be created. The tests are a form of documentation that describes the system under test. This documentation is written in a language programmers are fluent in. It is unambiguous, it is rigorously executable, and it is always in sync with the application code. The tests are the perfect documentation for programmers: they are code.

Moreover, the tests do not combine into a system of their own. The tests do not know about one another and do not depend on one another. Each test is a small, independent unit of code that describes how one small part of the system behaves.

5.1.5 Fun

If you have ever written tests after the fact, you know it is not fun. It is not fun because you already know the code works; you tested it manually. You are writing the tests only because someone told you to. It is extra work, and it is boring.

When you follow the three rules and write the tests first, the process becomes fun. Every new test is a challenge. Every time you make a test pass, you score a small success. Following the three rules turns your work into a sequence of small challenges and small victories. The process stops being boring; it gives you the satisfying sense of getting somewhere.

5.1.6 Completeness

Now let's return to writing tests after the fact. Even though you have tested the system manually and already know it works, you are forced to write those tests. Unsurprisingly, every test you write passes. Inevitably, you will also hit tests that are hard to write. They are hard because you did not think about testability when you wrote the code; you did not design the code to be testable. To write the tests, you must first restructure the code: break couplings, add abstractions, swap some function calls and arguments around. This feels like a lot of effort, especially since you already know the code works.

The schedule is tight, and you have more urgent things to do. So you set the test aside. You convince yourself that the test is unnecessary, or that you can write it later. And so you leave a hole in the test suite. And since you left holes, you suspect everyone else did too. When you run the test suite and it passes, you smile and wave it off, because you know that a passing suite does not mean the system works.

When such a test suite passes, no decision can be made. The only thing a passing run tells you is that nothing that was tested is broken. Because the suite is incomplete, it gives you no real support for a decision. But if you follow the three rules, every line of production code was written to make a test pass. The test suite is therefore very complete. When it passes, you can make a decision: the system is deployable.

That is the goal. We want to create a suite of automated tests that tells us it is safe to deploy the system.

Again, I don't want to paint an illusion. Following the three rules will give you a very complete test suite, but probably not a 100% complete one. That is because there are situations in which the three rules are impractical. Those situations are beyond the scope of this book; suffice it to say that they are limited in number and that there are solutions for them. The bottom line is that even the most diligent follower of the three rules is unlikely to produce a 100% complete test suite. But 100% is not required for deployment decisions. Coverage above 90%, approaching 100%, is sufficient, and that degree of completeness is absolutely achievable.

I have created test suites complete enough to support safe deployment decisions, and I have watched many others do the same. The completeness was never 100%, but it was always high enough to decide to deploy.

Caveat

Test coverage is a team metric, not a management metric. Managers are unlikely to know what the number actually means, and they should not use it as a goal or a target. The team should use it only to check that its testing strategy is sound.

A second caveat

Don't fail the build because of low coverage. If you do, programmers will be forced to strip the asserts out of their tests in order to drive coverage up. Code coverage is a complex topic that can be understood only in the context of deep knowledge of the code and the tests. Never let it become a management metric.

5.1.7 Design

Remember those functions that were hard to test after the fact? They were hard to test because they were coupled to behaviors you did not want to execute in a test. For example, the function you wanted to test might fire up the X-ray machine or delete rows from a database. The function was hard to test because you did not design it to be easy to test. You wrote the code first and thought about the tests afterward, and when you wrote the code, testability was probably the last thing on your mind. Now you face redesigning the code in order to test it. You look at your watch, decide this testing business has taken too long, and since you have already tested the code manually and know it works, you walk away, leaving a hole in the test suite.

But when you write the tests first, something very different happens. You cannot write a function that is hard to test. Because you must write the test first, you naturally design the function under test to be easy to test. And what keeps functions easy to test? Decoupling. Testability is, in fact, a synonym for decoupling.

By writing the tests first, you decouple the system in ways you would never have thought of otherwise. The whole system will be testable; therefore, the whole system will be decoupled.

This is why TDD is often called a design technique. The three rules force you into a much higher degree of decoupling.

5.1.8 Courage

So far we have seen that following the three rules brings a number of powerful benefits: less debugging, high-quality low-level documentation, tests that are fun and complete, and decoupling. But these are merely side benefits; none of them is the real reason for practicing TDD. The real reason is courage.

I told this story at the beginning of the book, but it is worth repeating.

Imagine you are looking at some old code on your screen. Your first thought is, "This code is ugly; I should clean it up." But your next thought is, "I'm not touching it!" Because you know that if you touch it, you will break it, and if you break it, it becomes yours. So you back away from the code and leave it to rot.

That is a fear reaction. You fear the code, you fear touching it, and you fear the consequences of changing it. So you fail to improve it; you fail to clean it.

If everyone on the team behaves this way, the code must rot. No one will clean it. No one will improve it. Every feature will be added in whatever way minimizes the programmer's immediate risk of breaking something. To that end, couplings and duplications will be introduced, even though everyone knows full well that coupling and duplication destroy the design and quality of the code.

Eventually the code becomes a horrible, unmaintainable mass of spaghetti through which little new development can be made. Estimates grow exponentially. Managers grow desperate. They hire more and more programmers in hopes of raising productivity, but that productivity never comes. Finally, in utter desperation, the managers agree to the programmers' demand: rewrite the whole system from scratch, and the cycle begins again.

Now imagine a different scenario. Go back to that screen full of messy code. Your first thought is to clean it up. What if you had a complete test suite, one you could trust when it passed? What if that suite ran fast? What would your next thought be? It would be something like this:

Gosh, I should rename that variable. Ah, the tests pass. Okay, now I'll split that big function into two smaller ones... good, the tests still pass... Okay, now I'll move one of the new functions into that other class. Oops! The tests fail. Put it back, quickly... Ah, I see, that variable needs to move along with it. Yes, the tests still pass...

When you have a complete test suite, you stop fearing changes to the code, and you stop fearing cleaning it up. So you clean it up. You keep the system neat and orderly. You keep the design intact. You stop producing disgusting spaghetti, and you stop dragging the team into the slump of low productivity and eventual failure. That is why we practice TDD: it gives us the courage to keep the code clean and orderly. It gives us the courage to act like professionals.

5.2 Refactoring

Refactoring is another topic that deserves a whole book of its own, and fortunately Martin Fowler has already written that wonderful book. In this chapter I discuss the discipline rather than specific techniques, and again, this chapter contains no code. Refactoring is the practice of improving the structure of code without changing the behavior defined by the tests. In other words, we change names, classes, functions, and expressions without breaking any tests. We improve the structure of the system without affecting its behavior.

This practice is, of course, tightly coupled to TDD. We need a test suite in order to refactor without fear, a suite that lets us stop worrying about breaking anything at all. The changes made during refactoring range from subtle cosmetics to deep structural adjustments. A change may be a simple rename, or a complex reorganization of a switch statement into polymorphic dispatch. Big functions are split into smaller, better-named ones. Argument lists are turned into objects. Classes with many methods are split into several smaller classes. Functions are moved from one class to another. Classes are extracted into subclasses or inner classes. Dependencies are inverted, and modules are moved across architectural boundaries.

And throughout all of these changes, the tests are kept passing.
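Although the chapter itself stays code-free, one of the refactorings just listed, reorganizing a type switch into polymorphic dispatch, can be sketched in a few lines. The shapes and names below are hypothetical, chosen only for illustration:

```python
import math

# Before (kept as a comment): a type switch that every new shape forces us to edit.
# def area(shape):
#     if shape["kind"] == "circle":
#         return math.pi * shape["r"] ** 2
#     elif shape["kind"] == "square":
#         return shape["side"] ** 2

# After: each shape carries its own area computation; no switch remains.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# The tests that pin down the behavior pass before and after the refactoring.
assert Square(3).area() == 9
assert abs(Circle(1).area() - math.pi) < 1e-9
```

The tests never changed; only the structure did, which is exactly what makes this a refactoring rather than a rewrite.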

5.2.1 Red-Green-Refactor

Weaving refactoring into the three rules of TDD gives us the well-known Red-Green-Refactor cycle (shown in Figure 5-1).

(1) Create a failing test.
(2) Make the test pass.
(3) Clean up the code.
(4) Return to step 1.
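One turn of the cycle can be sketched as follows. The `total_price` example is hypothetical, invented purely to illustrate the three phases:

```python
# RED: the failing test was written first.
def test_total_price():
    assert total_price([("apple", 2, 3.0), ("pear", 1, 1.5)]) == 7.5

# GREEN: the first version was crude; it merely made the test pass:
# def total_price(items):
#     t = 0
#     for i in items:
#         t = t + i[1] * i[2]
#     return t

# REFACTOR: same behavior, cleaner and more expressive; the test keeps passing.
def total_price(items):
    return sum(quantity * unit_price for _name, quantity, unit_price in items)

test_total_price()
```

Note that the refactor step changed only the code's form; the test that defines its behavior was untouched.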

I think of writing code that works and writing code that is clean as two separate dimensions of programming. Trying to control both dimensions at once is hard, perhaps impossible, so we separate them into two distinct activities.

To put it differently, it is hard enough to make code work at all, let alone make it clean. So we first focus on making the code work, by whatever crude means come to mind. Then, once the code works and passes its tests, we clean up the mess we made.

This makes it clear that refactoring is a continuous process, not one performed on a schedule. We do not make a huge mess over many days and then try to clean it up afterward. Rather, we make a very small mess, over the course of a minute or two, and then we clean that small mess up immediately. The word "refactoring" should never appear on a schedule. Refactoring activities should not appear in a project plan. We do not reserve time for refactoring. It is simply part of our minute-by-minute, hour-by-hour approach to writing software.

5.2.2 Large refactoring

Sometimes a change in requirements makes you realize that the current design and architecture of the system are suboptimal, and that a significant structural change is needed. Such changes are made within the Red-Green-Refactor cycle too. We do not create a project specifically to change the design. We do not reserve time in the schedule for such large refactorings. Instead, we migrate the code one small step at a time, while continuing to add new features in the normal Agile cadence.

Such a design change may take days, weeks, or even months. Throughout that period, the system continues to pass all of its tests and remains deployable to production, even though the transformation is not yet complete.

5.3 Simple design

The practice of simple design is one of the goals of refactoring. Simple design means writing only the code that is required, with a structure that keeps it simplest, smallest, and most expressive. Kent Beck's rules of simple design are as follows.
(1) Pass all the tests.
(2) Reveal the intent.
(3) Eliminate duplication.
(4) Minimize elements.

The numbering gives both the order of execution and the priority.

The first rule is self-evident. The code must pass all the tests. The code must work.

The second rule says that once the code works, it should also be expressive. It should reveal the programmer's intent; it should be easy to read and self-describing. At this step we apply a variety of relatively simple, mostly cosmetic refactorings. We also split large functions into smaller, better-named ones.

The third rule says that after making the code as descriptive and expressive as we can, we hunt down and eliminate any duplication in it. We do not want the code to say the same thing more than once. The refactorings in this step are usually more complex. Sometimes eliminating duplication is as simple as moving the duplicated code into a function and calling it from many places. Other times it calls for more interesting solutions, such as the Template Method, Strategy, Decorator, or Visitor design patterns.
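Though the chapter itself stays code-free, the Template Method route to eliminating duplication can be sketched briefly. The report classes below are hypothetical, invented only to illustrate the pattern:

```python
# Two exporters once duplicated identical header/footer code. The duplication
# is pulled up into a Template Method; only the varying body stays behind.
class Report:
    # The invariant skeleton lives in exactly one place...
    def export(self):
        return "\n".join(["=== REPORT ===", self.body(), "=== END ==="])

    # ...and the varying step is deferred to subclasses.
    def body(self):
        raise NotImplementedError

class SalesReport(Report):
    def body(self):
        return "sales: 42 units"

class InventoryReport(Report):
    def body(self):
        return "inventory: 7 items"

assert SalesReport().export() == "=== REPORT ===\nsales: 42 units\n=== END ==="
```

The header and footer now exist once instead of once per report type, so a change to the frame touches a single method.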

The fourth rule says that once all duplication has been eliminated, we should strive to reduce the number of structural elements: classes, functions, variables, and so on. The goal of simple design is to keep the design weight of the code as small as practical.

Design weight

The design of a software system can range from quite simple to extraordinarily complex. The more complex the design, the greater the cognitive load it places on programmers. That cognitive load is design weight. The heavier the design, the more time and effort programmers must spend understanding and manipulating the system.

Similarly, requirements range in complexity from modest to very great. The greater the complexity of the requirements, the more time and effort it takes to understand and manage the system.

These two factors do not simply add up, however. Complex requirements can be simplified by choosing a more complex design. The trade-off is often favorable: choosing an appropriate design for the current set of features can reduce the overall complexity of the system.

Keeping design complexity and functional complexity in balance is the goal of simple design. With this practice, programmers continually refactor the design of the system to keep it in balance with the requirements, and thereby keep productivity maximized.

5.4 Pair programming

Over the years, the practice of pair programming has attracted plenty of controversy and misunderstanding. Many people scoff at the idea that two (or more) people working together on the same problem could possibly be efficient. First of all, pairing is optional; no one should be forced to pair. Second, pairing is intermittent; there are plenty of good reasons to code alone at times. A team might pair about 50% of the time, but that number is not important. It might be as low as 30% or as high as 80%. For the most part, this is an individual and team choice.

5.4.1 What is pairing?

Pairing is two people working together on the same programming problem. The pair may work at a single computer, sharing the screen, keyboard, and mouse. Or they may work on two connected machines, as long as both can see and manipulate the same code. The latter option works well with popular screen-sharing software, which allows partners in different locations to pair, provided they have a good data and voice connection. Pairing programmers sometimes take on different roles: one may be the "driver" and the other the "navigator."

The driver has the keyboard and mouse, while the navigator takes a longer-range view and makes suggestions. Another pairing style has one programmer write a test and the other write the code to make it pass; the second programmer then writes the next test and hands it back to the first to implement. This style is sometimes called ping-pong. More often, though, there is no role division at all. The programmers are simply co-authors, sharing the keyboard and mouse cooperatively.

Pairs do not need to be scheduled in advance; they form and break up according to the programmers' preferences. Managers should not try to impose pairing with tools such as pairing schedules or pairing matrices. Pairings are usually short-lived. A pairing may last as long as a day, but more often no more than an hour or two. Even pairings as short as 15 to 30 minutes are beneficial. Stories are not assigned to pairs. A single programmer, not a pair, is responsible for completing each story, and the time required to complete a story is generally much longer than a single pairing. Over the course of a week, each programmer spends about half of their pairing time on their own tasks, getting help from their pairing partners, and the other half helping others complete theirs.

Experienced programmers should pair with novices more often than with other experienced programmers. Likewise, novices should seek the help of experienced programmers more often than that of other novices. Programmers with special skills should frequently pair with programmers who lack those skills. The goal of the team is to spread and exchange knowledge, not to concentrate it in a few heads.

5.4.2 Why pair?

Pairing is how we behave like a team. Members of a team do not work in isolation from one another; they collaborate on a second-by-second basis. When one member of a team goes down, the others cover the hole left behind and keep advancing toward the goal.

Pairing is by far the best way to share knowledge among team members and to prevent the formation of knowledge silos. It is the best way to ensure that no one on the team is indispensable. Many teams report that pairing reduces defects and improves design quality, and in most cases this is likely true. It is generally better to have more than one pair of eyes on any given problem. Indeed, many teams have replaced code reviews with pairing.

5.4.3 Pairing as code review
Pairing is a form of code review, but it is far superior to the usual kind. The two partners are co-authors for the duration of the pairing. They read and review old code, of course, but with the intent of writing new code. The review is therefore not merely a static check that the team's coding standards have been applied. Rather, it is a dynamic review of the current state of the code, with an eye to where the code needs to go in the near future.

5.4.4 What is the cost?

The cost of pairing is hard to measure. The most direct cost is that two people are working on one problem. Obviously this does not double the effort of solving the problem, but it does carry some cost. Various studies suggest the direct cost may be around 15%. In other words, it would take 115 pairing programmers to do the work of 100 non-pairing programmers (not counting code reviews).

A rough calculation suggests that a team pairing 50% of the time pays less than 8% in productivity. And if the practice of pairing replaces code reviews, productivity is likely not to decrease at all. Then there are the benefits of cross-training, knowledge exchange, and close collaboration. These are not easy to quantify, but they can be substantial.

My experience, and that of many others, is that pairing, done informally and at the programmers' own discretion, is quite beneficial to the whole team.

5.4.5 Only two people?

The word "pairing" implies that exactly two programmers are involved. While that is usually the case, it is not a hard rule. Sometimes three, four, or more decide to work on a problem together. (Again, this is at the programmers' discretion.) This is sometimes known as "mob programming."

5.4.6 Management

Programmers often worry that managers will frown on pairing, or even demand that pairs break up and stop wasting time. I have never seen this happen. In the half-century I have been writing code, I have never seen a manager interfere at so low a level. In my experience, managers are generally pleased to see programmers collaborating and cooperating; it creates the impression that work is getting done.

If you are a manager tempted to interfere because you fear that pairing is inefficient, set your fears aside and let the programmers work this out for themselves. They are, after all, the experts. And if you are a programmer whose manager tells you to stop pairing, remind that manager that you are the expert, and that you, not the manager, must be responsible for the way you work.

Finally, never, ever ask for permission to pair. Or to test. Or to refactor. Or to... You are the expert. You decide.

5.5 Conclusion

The Agile technical practices are the most essential ingredient of any Agile effort. Any attempt to adopt Agile without the technical practices is doomed to fail. The reason is simple: Agile is an efficient mechanism for making a big mess in a hurry. Without technical practices to keep technical quality high, the team's productivity will quickly bog down and begin an inescapable death spiral.

This article is excerpted from "Clean Agile: Back to Basics".

1. Reviews the history of Agile and restates its original intent, explaining the essence of Agile;
2. Clears up the misunderstandings and confusion that have long surrounded Agile, so it can quickly return to the right path;
3. Gets to the root of the problem, teaching software industry practitioners the values and principles of Agile;
4. Twenty years after the Agile Manifesto, offers practical answers to the key questions facing Agile developers.

Nearly 20 years after the signing of the Agile Manifesto, Robert C. Martin ("Uncle Bob"), a legend of the software development industry, returns to describe the values and principles of Agile for a new generation of software practitioners, programmers and non-programmers alike. Martin, the author of "Clean Code" and other hugely influential software development guides, was one of the original founders of Agile. Now, in this book, he clears up the long-standing misunderstandings and confusion about Agile and restates its original intent.

Martin states the essence of Agile plainly: although Agile is a small discipline that helps small teams run small projects, it has enormous implications for the entire IT industry, because every large project is composed of many small projects. Drawing on 50 years of industry experience, he shows in plain language how Agile can help software industry practitioners achieve true professionalism.


Origin blog.csdn.net/epubit17/article/details/107830106