Programmer Metrics: Analytics for Improving Software Teams

Preface to Programmer Metrics: Analytics for Improving Software Teams

Let's not be too convinced that we haven't missed something important.

—Bill James (baseball statistician and author), from "Underestimating the Fog"

Programmer Metrics: Analytics for Improving Software Teams is a book about metrics and patterns for programmers and software development teams. Some of the ideas for this book stem from thinking I started many years ago about the makeup of software development teams: for better or worse, all the little contributions and the hard work of unsung heroes are key components of a project's success. For nearly two decades, I have been responsible for forming and managing teams of designers, programmers, and testers. Over the years, I've realized that a software development team, like a sports team, needs players in a variety of roles and professionals with different skills to be successful. I've also come to recognize that the pattern of success and failure is not necessarily as simple as I once thought.

I've seen a simple pattern, and maybe you've seen it too: on every successful software development team I've been part of, there has always been at least one colleague who has no regrets doing chores like creating installers, improving build scripts, or fixing other people's mistakes to help the team deliver product features. If there is no one on the team to do these unglamorous things, they go unfinished, or at least not done well enough.

Another pattern is this: on many experienced software development teams I've seen, there are typically one or two programmers who serve as the clear technical leads and key contributors, even if they don't hold the corresponding title. These key programmers not only solve problems, they also exert a powerful influence on others: the skills of the programmers around them develop rapidly, getting closer and closer to the level of the technical leaders. The end result is that one or two great people raise the level of the entire team.

Here is another pattern I observed on long-term projects in which I personally participated, one especially common in small teams at the startup stage: when a project is about 80% complete, the team often "hits the wall." Like a marathon runner reaching the 20-mile mark, after months of hard work everyone on the team is exhausted, physically and mentally. Sometimes when the team hits that wall, it stays stuck and can't come back to life; the remaining 20% of the project seems like it will never get done, and in the end we basically stumble toward the finish line. But sometimes a team can break through the wall, come back to life, and find its pace again. In every case I've seen, that recovery comes from the good qualities of certain people on the team who can lighten the workload, create a relaxed atmosphere, boost morale, and make everyone feel good. Thanks to the jokers on the team, everyone gets back to a (mostly) positive frame of mind, ready to sprint to the finish line.

Once we see them, these patterns of success seem obvious, but to see them we must first learn how to look. When I started thinking about this, I wondered if we could establish a set of metrics that would give us a clear, objective way to identify, analyze, and discuss the successes and failures of software development teams, along with a holistic view of programmer skills and contributions. Not just a way of judging performance, but a key aid to better understanding and achieving success, one that shows where and how to improve. I tried a few of these methods on my own teams, with excellent results. Encouragingly, these methods work for others as well.

This book is my attempt to flesh out these ideas and practices. There is surprisingly little material, written or otherwise, on software development team metrics. We have extensive books on interviewing, skills testing, project estimation, project management, and team management, as well as books on agile and other methodologies for more efficient development processes. But we have rarely discussed or explored quantitative methods for improving the effectiveness of software development teams by understanding the skills and work of individual programmers.

The metrics used by the vast majority of software development teams today are typically a simple collection of counts gathered during project estimation or project management. We measure using the number of bugs, the number of tasks, time increments (hours/days/weeks), and, on agile teams, story points and velocity. There are also more sophisticated systems and tools for project estimation, such as size measurements based on thousands of lines of code (KLOC) and function points.
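As a minimal sketch of the kind of simple counting most teams already do, the following computes velocity and bug totals from sprint records. The data and field names here are entirely hypothetical, for illustration only.

```python
# Hypothetical sprint records: the simple counts most teams already track.
sprints = [
    {"points_completed": 21, "bugs_reported": 5, "tasks_closed": 12},
    {"points_completed": 18, "bugs_reported": 9, "tasks_closed": 10},
    {"points_completed": 25, "bugs_reported": 4, "tasks_closed": 14},
]

# Velocity is conventionally the mean story points completed per sprint.
velocity = sum(s["points_completed"] for s in sprints) / len(sprints)

# Plain totals, the other common measure.
total_bugs = sum(s["bugs_reported"] for s in sprints)
total_tasks = sum(s["tasks_closed"] for s in sprints)

print(f"velocity: {velocity:.1f} points/sprint")  # velocity: 21.3 points/sprint
print(f"bugs reported: {total_bugs}")             # bugs reported: 18
```

Counts like these are easy to gather, which is exactly why they dominate; the book's argument is that they answer "how much" but not the deeper questions that follow.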

But our commonly used metrics do not provide enough depth to answer many of the key questions we face, such as:

  • How good can our software development team be?
  • What kind of team members can contribute to the team's success?
  • Which ability improvement will help the team to be more successful?

If we can't answer these seemingly simple yet profound questions well, or lack a clear way to discuss and think about the answers, then we are not doing everything we can to succeed as individuals and as team members. Of course, we have to get to the bottom of what success really is and how to measure the success of a software development team, rather than assuming these questions are already adequately addressed when in fact they are not. In what follows, I will suggest some new and different approaches that can help us better understand these questions and arrive at possible answers.

I'm a sports fan, so in many places in this book I choose to use sports as an analogy. However, this does not mean that you need to like or understand sports in order to understand the concepts in this book. Like all analogies, its purpose is simply to help us understand and remember concepts better. Personally, I find it appropriate and enjoyable to use the sports analogy to discuss software development teams.

I think of a software development team as being like a sports team. Software products are usually developed by teams rather than a single person, and while there are examples of a single programmer doing all the work alone, that programmer is playing the various roles of a whole team. We know that in sports, successful teams need players who complement each other; they do not need, and should not require, everyone to have the same skills. In addition to players who are good at running, passing, and receiving, a team also needs players who are good at defending and tackling. Not everyone is good at the same things. In fact, a team in which all players have the same strengths, no matter how strong, will usually lose to a team whose players have different, complementary skills. A team succeeds only when every player does their own job well.

The initial idea of measuring programmers through statistical analysis came from the quantitative analysis of organized sports. Computers and software have brought dramatic changes to the statistical analysis of professional athletes, helping teams determine which player skills most directly help them win. Bill James and other analysts established a discipline around the statistical analysis of baseball players called sabermetrics.

The pioneers who applied these new methods to team management were extensively trained in data analysis: Daryl Morey (general manager of the NBA's Houston Rockets) studied computer science at Northwestern University, and Paul DePodesta (vice president of MLB's New York Mets and former Los Angeles Dodgers general manager) studied economics at Harvard. This new approach is often seen as a break from the predominantly subjective, intuition-based approach to talent assessment and team building. Most teams are now big businesses with a lot of money at stake, and in this new era team managers spend more time collecting and analyzing metrics to help build winning teams in more rational and predictable ways (and, as described in Moneyball, in more cost-effective and profitable ways). Metrics are not meant to replace personal intuition and creativity, but to help us improve our understanding. The key steps in this new approach include:

  • Discover ways to measure the difference between winning and losing teams.
  • Discover how to measure the size of an individual player's contribution to the team.
  • Identify the characteristics of the key players that will determine the outcome of your team.
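The first two steps above boil down to checking how strongly a candidate metric tracks winning. A minimal sketch, using a hand-rolled Pearson correlation and entirely hypothetical season data (the metric values and win totals are invented for illustration):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical seasons: a candidate team metric vs. games won.
metric = [0.42, 0.55, 0.38, 0.61, 0.50]   # e.g., a team's on-base percentage
wins   = [70, 88, 65, 95, 81]

r = pearson(metric, wins)
print(f"correlation with wins: {r:.2f}")
```

A metric that correlates strongly with winning across many seasons is a candidate for the team-level analysis; the same machinery then applies at the player level to estimate individual contributions.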

The process of discovering meaningful metrics and formulas in sports is not static but ongoing. Understandably, many important and subtle skills are difficult to measure and analyze, such as a defender's instinct for reading a ball handler, or the ability to perform under pressure. Of the new metrics and ideas Bill James introduced in his baseball annuals and abstracts, for example, some were adopted and used, some were refined, and some proved less useful and faded away.

Alongside this public evolution, metrics are also evolving quietly in private. Sports is a competitive field, so the statistics and formulas that teams actually employ are kept secret. While many analysts publish publicly, they also work as private advisors to individual teams. Theo Epstein (general manager of MLB's Red Sox) and Billy Beane (general manager of MLB's Oakland A's) may share some information with each other, and both benefit from metrics that are well known in the wider community, but in the end they are each trying to outwit the other, so certain elements of their methods are known to no one outside their organizations.

Unlike the competitive arena of the major sports leagues, our software development world is less public, and most programmers stay out of the public eye. We don't have fans following our stats or putting our posters on their walls (a kind of scary idea anyway). Ironically, we work in the very field that in many ways enabled deep statistical analysis in sports (and other industries), yet we have not embraced or fully considered the potential benefits of quantitative analysis in our own software development world.

Like other workers, we may be naturally skeptical that good metrics can be found or that real-world examples exist, and we may worry that these statistics will be misapplied by managers for performance appraisals and the like. The premise of this book, however, is that there are multiple skills and outcomes in our field that are truly measurable, and that from them we can gain meaningful and useful insights for ourselves and our teams. These numbers are not black and white, and a few individual numbers don't tell the whole story. Knowing Derek Jeter's batting average or Tim Duncan's shooting percentage tells you only a small part of how effective they are as players or teammates, but when we look at multiple stats together, we can identify patterns in individuals and teams, and sometimes our findings are unexpected and revealing.

Let me give you an example of a story about a software development team that I managed for many years.

Note: The stories in this book are drawn from my previous work experience, but in many cases they have been simplified or condensed to convey the main points. To protect privacy, I will not use real names, including my own.

This example takes place at a venture-funded startup with a team of 6 programmers and 3 testers (this book focuses on programmers, so in this example I'll focus on them). There were three key stages in our first two years: the initial development of the 1.0 release, which took about 9 months; the 6 months after the 1.0 release, spent supporting our first customers and developing the 1.1 release; and then about 9 months developing the 2.0 release. The team had 3 senior programmers, each with more than 10 years of development experience and excellent domain knowledge, and 3 junior programmers with good educational backgrounds and about two years of commercial software development experience. Over those two years, all the senior programmers remained on the team, but two of the junior programmers left after the first year, and we hired two new programmers to replace them.

Our executive team and investors considered our initial 1.0 release a great success. We won an award at a key industry show and received many positive product reviews. Many resellers were interested in us, and the number of customer evaluations was twice what we expected, so much so that our salespeople could barely keep up. Ours was an on-premise software solution that ran in the customer's environment.

That was reason enough for our software development team to feel good, and everyone was complimenting us. But was our 1.0 release really a success?

It took us a while to realize the problem, though by probing the data at the time we should have spotted some serious issues. The key, ugly fact was this: as we gained visibility and customer interest intensified, each trial customer made an average of 7 calls to customer support, even though virtually every customer received the installer and installation assistance. Those calls required an average of 3 full days of work with the customer to investigate each issue, and each customer found an average of 3 new bugs in the product that were not previously known. The programmer time spent supporting customer trials (including assisting support and fixing major product issues) was measured in weeks, not hours or days.
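The back-of-the-envelope arithmetic behind that last sentence is worth making explicit. The sketch below uses the averages from the story, reading the "3 full days" as per call; if it were 3 days per customer instead, the conclusion (weeks, not hours) would still follow once many trials run in parallel.

```python
# Averages per trial customer, taken from the story above.
support_calls_per_customer = 7   # support calls per trial
days_per_call = 3                # full days to investigate each issue
new_bugs_per_customer = 3        # previously unknown bugs found

# Total investigation effort per trial customer.
days_per_customer = support_calls_per_customer * days_per_call   # 21 days
weeks_per_customer = days_per_customer / 5                       # ~4.2 weeks

print(f"support effort per trial: ~{weeks_per_customer:.1f} programmer-weeks")
print(f"new bugs per trial: {new_bugs_per_customer}")
```

With evaluations running at twice the expected rate, even this rough calculation shows the support load consuming the team long before anyone looked at a dashboard.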

The seemingly positive numbers were also misleading the team. Thanks to several large deals, we exceeded our early revenue plan, but our overall conversion rate from evaluator to paying customer, and the time it took to convert, were nowhere near what a successful business requires. This was at least partly due to the usability and quality issues reflected in the support effort and the bugs found.

In other words, while outsiders might think our initial release was a huge success, it was in fact a partial success at best. The data in Figure 1-1 shows how small the gains in new customers were compared to the bugs and support issues.

Figure 1-1: List of key issues revealed by key metrics for the 1.0 release

There was another big problem. Over time, some programmers on the team ran into trouble. With less time spent on exciting new features and more time spent on tedious bug investigation and fixing, on top of the intense support work of the startup phase, cracks started to show between members and within the team. Personality differences were magnified, some programmers slowly started avoiding each other, and even yelling in the workplace became a regular occurrence.

In the 6 months after the 1.0 release, while the team was both supporting customers and developing the 1.1 release, there was chaos, even disaster, within the team, though people outside still thought everything was fine. Most of each programmer's time was spent on bug fixes, and we had to delay most incremental product improvements. Although the 1.1 release fixed the most serious bugs, many issues remained after it shipped, and neither the support load nor the conversion rate changed substantially.

Then, seemingly all of a sudden, everything got better. Even though the rate of customer support work stayed the same, the team began to deal with software issues more effectively. Fewer people were involved in each issue, and more time was freed up for new feature development and for major improvements in the problem areas. The 1.1 release had barely any feature enhancements and took 6 months; the 2.0 release contained many new features and major product improvements, yet took only 9 months for a team of the same size. With the 2.0 release, conversion rates and software issue rates improved significantly, and on that basis we can clearly say that 2.0 was the greater success. So what happened? Did everyone simply get used to solving the problems, or did the problems become repetitive or less severe? To a certain extent, yes. But the key change was the departure of two junior programmers and the arrival of two new ones.

The two departing programmers left of their own accord. While they were happy working on the 1.0 release, they didn't like most of the support work that followed it. Whenever they encountered a problem or unclear code, their habit was to ask others for help, especially the senior programmers. Over time, one of them grew more temperamental, and eventually aggressive.

The programmers who joined the team were not significantly different from those who left in educational background, work experience, or talent. The difference was that after the first product shipped, two key skills became very important and useful: a strong desire and willingness to solve problems independently, and the ability to deal with emergencies calmly, even cheerfully. Figure 1-2 shows how a replacement can outperform his predecessor.


 

Figure 1-2: Comparison of predecessor (Programmer A) and replacement (Programmer B) demonstrates a key factor in team success

Because the new programmers had the right skills, they were able to take on and solve more problems on their own. It's not that we spent less time on customer support or fixing specific issues, but we could involve fewer people and interrupt their work less, so the rest of the team could focus on other tasks. At last, we had time to do our real work. Because of the personality conflicts with the two programmers who left, we consciously preferred and selected candidates with different personalities; we just didn't realize at the time how much this would benefit our overall productivity and team success.

We weren't paying close attention to our metrics when these things happened. Looking back, I realize how focusing on the team's key metrics could have helped us react faster and more effectively after that first product release. When people are receiving congratulations from outsiders, it's hard to convince everyone that there is a problem, or to get them to recognize its importance. It's easy for complacency to breed in a team, or, conversely, for morale to sink when the team doesn't get the appreciation it deserves. Focused on the whole process of product development, team metrics can balance the flattery or criticism you receive and provide a fresh perspective on who you really are and what you're really doing. Measurement and discussion around skills like self-reliance and calm under pressure can help us develop those skills, and ensure that the programmers who have them receive the credit and recognition they deserve for their contributions to the team.

The purpose of this book is to introduce a method and a set of metrics (codermetrics, rendered here as programmer metrics) covering a variety of areas relevant to individual developers and software development teams. These methods are designed to challenge our assumptions, in the hope that by doing so we can better discover which patterns actually lead to success. To make them easier to understand and remember, the metrics introduced in this book follow the naming style of similar statistical metrics in sports. They are meant to provide a vocabulary for better communication, and I hope we will find them useful in our software development world. Ultimately, their value can be measured by how much they help us answer the key questions we face: what it means to "win," and how we can improve ourselves and our teams.

It is my hope that the concepts in this book will foster more productive conversations, within and between organizations, among programmers, team leaders, and managers. No doubt many of the individual metrics presented here can and will be improved; some of these ideas may be abandoned, and better metrics may yet be discovered. For my part, I've seen tremendous value in defining a variety of metrics across a team, working out how to measure individual and team activity and link it to organizational goals, and then sharing and discussing the data within the team. Even if you are not comfortable using metrics, I hope you find something valuable here, and that some of the ideas in this book positively influence your thinking about programmers and software development teams. If this book gets people thinking about these concepts, and perhaps using some of the methods outlined here for broader and deeper rational analysis of programmer contributions and team building, I will consider it a success.

It is important to note that many roles and skills in the software development process are outside the scope of this book. It covers only a part, both because it is difficult to address every participant and skill in one book, and because I have not yet defined metrics for other skills. Perhaps in the future we can develop metrics for designers, testers, managers, and other roles, and perhaps there will be books about those as well.

-------------------------------------


Programmer Metrics: Analytics for Improving Software Teams

Original title: Codermetrics: Analytics for Improving Software Teams

Author: Jonathan Alexander

Translators: Zhang Liaoyuan / Zhou Feng / Zhang Gang / Song Lifen

Publisher: China Machine Press

Year of publication: 2013-3

Price: 59.00 CNY

ISBN:9787111401407

Douban Collection: http://book.douban.com/subject/21365482/

Download sample chapter: http://t.cn/zYu1g3o

 

Introduction

How can you improve your software development team? This concise book introduces programmer metrics, a clear and objective way to identify, analyze, and discuss the success or failure of software engineers -- not as part of performance reviews, but to help teams become more cohesive, productive units.

Experienced team builder Jonathan Alexander explains how programmer metrics help teams understand exactly what happens over the course of a project, allowing each programmer to focus on specific improvements. Alexander presents a variety of simple and more sophisticated programmer metrics and shows how to use them to build your team.

  • Learn how programmer metrics can change long-held assumptions and improve team dynamics.
  • Get advice on integrating programmer metrics into existing processes.
  • Ask the right questions to determine the type of data you need to collect.
  • Use metrics to measure individual programmer skills and team effectiveness over time.
  • Identify each programmer's contribution to the team.
  • Analyze responses to software and its features, and verify that programmers are working toward team and organizational goals.
  • Build better teams by using programmer metrics to guide staffing adjustments and additions.

 

About the Author

Jonathan Alexander has over 25 years of software development experience and is now VP of Engineering at Vocalocity, a leading provider of cloud-based business communication services. Before joining Vocalocity, he built and managed software teams at companies including vmSight, Epiphany, and Radnet. He is a computer science graduate of UCLA and, early in his career, wrote software for the author Michael Crichton.

 

 
