DevData Talks | Zhang Le, Ru Bingsheng, Ying Kuohao, Ren Jinglei: A Review and Outlook of R&D Performance Practice in 2022

The ups and downs of 2022 are now behind us. It was a year of an unpredictable external environment, yet also of steady growth for the R&D efficiency industry, which shifted from grand concepts and value statements to concrete problems and practical, actionable solutions.

At the start of the new year, it is worth looking back: what were the keywords of the R&D efficiency industry in 2022? What did the industry's pioneers do, and how did they tackle the common challenges of building R&D efficiency? In the final DevData Talks event of 2022, we invited four industry experts to discuss these topics.

For an emerging industry that is gradually scaling up under the pressure to cut costs and raise efficiency, the accumulation and exchange of knowledge is vital. The pitfalls the pioneers hit, the detours they took, the patterns and anti-patterns they explored, and the methods and thinking they accumulated are the industry's most precious assets.

This roundtable brought together exactly such a lineup: the four guests are advanced practitioners in the field of R&D efficiency, with rich hands-on experience and deep reflection. At the same time, they come from different backgrounds and play different roles in their teams, so they brought different perspectives to the conversation.

We hope this excerpt from the two-hour live broadcast, an 8,000-word distillation of the discussion, offers inspiration for your 2023 R&D performance planning.

  • Summary: what have you done in the past year in terms of R&D efficiency?

  • Obstacles and challenges in building R&D efficiency

  • Measure with systems thinking

  • Specific issues in each field

  • Outlook for performance improvement in 2023

1 Summary: What have you done in the past year in terms of R&D efficiency?

Ying Kuohao: Build R&D digitalization in four steps, making data the common language of the R&D team

As a practitioner in the Internet sector, frankly speaking, it is hard to avoid the keyword "chill" for 2022. Not only did the layoff wave, euphemistically called the "graduation tide", sweep the entire Internet industry, but many companies, including ours, saw their business take a hit.

Against that backdrop, "where is the value of technology to the business?" became a soul-searching question. R&D is one of an enterprise's cost centers: how are its OKRs linked to the business, and how does it drive growth? Especially in a downturn, should the R&D team's focus shift, and if so, how? These are questions R&D managers must think through clearly, see clearly, and explain clearly.

This is why Ziroom actively pushed R&D digitalization in 2022. We focused on the two ends of project management: requirement review at project initiation, and the retrospective at the end. Before improving anything we wanted to understand the status quo, only to find that no data was available. Each team's processes and tools were inconsistent, so people were talking past each other. Our first step was therefore to unify process and tooling:

  • Process: take stock of the existing processes and abstract them into a standardized flow with four stages and eight review points
  • Tools: build our own project management platform around a project-management model that accumulates data naturally, ensuring that projects, requirements, and tasks are all online

With the data in place, we built a measurement system and put it to use in daily management:

  • Measurement: the approach is top-down; first make each team's Team Leader understand it. Based on our own needs we distilled three reports (the results table, the process table, and the business-app table) that describe the current state of R&D along the dimensions of delivery efficiency, quality, and stability, and serve as a common language between technical teams, and even between technical and business teams
  • Applying measurement: the weekly meeting is the fixed venue for disclosing indicators, where key indicators and abnormal indicators are discussed

In 2022, Ziroom's main R&D efficiency work was to use these four steps to digitalize the R&D process, make it measurable, and turn measurement into action.
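To make the "turn measurement into action" idea concrete, here is a minimal sketch of how a results-table-style report could be assembled. It assumes a CSV export from the project platform with hypothetical columns (team, requirement_id, created_at, delivered_at, escaped_bugs); it illustrates the pattern and is not Ziroom's actual system.

```python
# Minimal sketch of a "results table": throughput, lead time, and escaped
# bugs per team, from a hypothetical export of the project platform.
import pandas as pd

df = pd.read_csv("requirements_export.csv",
                 parse_dates=["created_at", "delivered_at"])

# Delivery lead time in days for each requirement.
df["lead_time_days"] = (df["delivered_at"] - df["created_at"]).dt.days

# One row per team, roughly matching the delivery-efficiency and quality
# dimensions described above.
result_table = df.groupby("team").agg(
    throughput=("requirement_id", "count"),
    median_lead_time_days=("lead_time_days", "median"),
    escaped_bugs=("escaped_bugs", "sum"),
)
print(result_table.sort_values("median_lead_time_days"))
```

A report like this is exactly the kind of artifact that can anchor the weekly indicator-disclosure meeting.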

Ru Bingsheng: R&D efficiency enters the deep-water zone

In past years, discussions of R&D efficiency focused more on the concept and on why to do it. The main change in 2022 is that people broadly agree on the "why" and have started asking "how". In the more advanced teams, the obvious pain points that are easy to solve, the so-called low-hanging fruit, have largely been picked. Efficiency building entered the deep-water zone this year, and we need to solve problems that are more hidden and possibly more fundamental.

The first important thing we did in 2022 was integrating the toolchain to connect the entire R&D process, improving the developer experience with near-one-stop tooling so that developers can focus on more creative work such as coding while the tools handle the repetitive, process-heavy chores.

The new direction also brings new challenges: although the toolchain change delivers long-term value to engineers, the most immediate pain is the pain of switching tools. We did a lot of work to minimize the barriers engineers face when switching.

The second thing is that performance metrics no longer chase perfection but focus on the essential goals. Being goal-oriented means starting with the end in mind to find a small number of useful indicators, establishing a baseline, identifying the influencing factors, driving improvement, and continuously tracking and verifying.

The third thing is paying attention to the developer experience, and avoiding team-level efficiency achieved at the cost of involution and damaged developer satisfaction.

Zhang Le: Solve specific problems more pragmatically

In 2022 the entire technology industry focused on cutting costs and raising efficiency, and paid more attention to R&D efficiency. From my hands-on experience, 2022 had two keywords of the year:

The first keyword is more pragmatic, more focused. At Tencent this year we put most of our energy into building the tool platform and driving adoption across business units, and rarely raised new concepts or models.

The tool platform integrates the toolchain, as Mr. Ru also mentioned; here I will go deeper into two of its capabilities: automation and analysis.

Automation sounds like an old term that lacks imagination. But if you look at the many integrated platforms on the market, you find that the automation potential of R&D tools has not really been realized. First, many platforms merely combine tools from different domains into one portal, and the development workflow is still not smooth. Second, many R&D tools build in heavy-handed process controls to ensure that the data managers see is reliable, requiring engineers to do a lot of clicking and switching, which adds extra workload.

This year we also observed some industry changes. Atlassian, the company behind Jira, has offerings in code, process management, CI/CD, and other domains, and this year it added a platform layer to its product matrix, with automation among the core capabilities. Concretely, it lets users configure automation rules in a low-code/no-code way and abstracts cross-domain, cross-product events and flows: automatic task dispatching, automatic creation of subtasks, automatic time updates, automatic status transitions, automatic pipeline triggering, and so on. This automates the transactional work and frees up engineers' energy. It looks plain, but every engineer can use it every day, and the cumulative value is large.
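As an illustration of that low-code automation pattern (declarative rules with a trigger, a condition, and actions, dispatched by a small engine), here is a Python sketch. All names are invented for illustration; this is not Atlassian's API.

```python
# Sketch of an automation-rule engine: rules are data, a dispatcher routes
# cross-tool events to matching rules, and actions do the transactional work.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    trigger: str                       # e.g. "issue.status_changed"
    condition: Callable[[dict], bool]  # predicate over the event payload
    actions: list[Callable[[dict], None]] = field(default_factory=list)

RULES: list[Rule] = []

def on_event(event_type: str, payload: dict) -> None:
    """Dispatch an event to every matching rule, like a webhook handler."""
    for rule in RULES:
        if rule.trigger == event_type and rule.condition(payload):
            for action in rule.actions:
                action(payload)

# Example rule: when an issue moves to "In Progress", create a test subtask
# and trigger the CI pipeline for its branch.
def create_test_subtask(p: dict) -> None:
    print(f"create subtask 'write tests' under {p['issue']}")

def trigger_pipeline(p: dict) -> None:
    print(f"trigger pipeline for branch {p['branch']}")

RULES.append(Rule(
    trigger="issue.status_changed",
    condition=lambda p: p["new_status"] == "In Progress",
    actions=[create_test_subtask, trigger_pipeline],
))

on_event("issue.status_changed",
         {"issue": "PROJ-42", "new_status": "In Progress", "branch": "feat/42"})
```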

With automated tools guaranteeing the user experience and data accuracy, performance analysis can then run across domains and data sources to understand end-to-end R&D efficiency, quality, and waste.

The second keyword is finding certainty amid uncertainty.

For example, the new concept of platform engineering has been very popular recently. I personally think platform engineering largely overlaps with current DevOps practice; both advocate giving engineers reusable self-service capabilities to improve the development experience and productivity. But the new concept also brings some inspiration, for example its emphasis on staying closer to the developer's perspective. This year we added an application-centric view to the tool platform, using the application as the hub that connects code, branches, pipelines, artifacts, resources, deployment, logs, monitoring, and observability, meeting developers' actual needs and reducing their cognitive load.

In terms of measurement, we also focus more on concrete scenarios, such as value-stream analysis to improve responsiveness, and risk detection at code commit and release time, rather than building huge models or launching sweeping campaigns.

Over the next few years the technology of the software R&D industry may keep changing and new concepts will keep emerging, but I believe one thing will stay constant: find users with real needs and apply product thinking to solve practical problems.

Ren Jinglei: Metrics connect upstream and downstream; open-source the infrastructure

My summary follows the evolution of Simayi's R&D performance measurement products. As one of the pioneers in the R&D efficiency industry, Simayi's evolution can to some extent reflect the industry's latest trends, so I will summarize it here in case it is a useful reference.

The first move is combining deep code analysis with data from other domains. Early on, Simayi focused on deep code analysis for measurement, because we believed that code-domain indicators are closer to R&D, more reliable, and less subject to human manipulation. This year we connected the measurement of the coding stage with its upstream and downstream: linking to requirements, to calibrate requirement granularity and make requirement indicators more reliable; and linking to testing, to measure the coding effort of bug fixes. In line with this move, Simayi built Apache DevLake, a platform for cross-domain, cross-data-source R&D data collection and management, which was open-sourced and donated to the Apache Software Foundation.

The second move is introducing the GQM (Goal-Question-Metric) method into the product. As the other guests said, performance measurement cannot revolve around indicators and data alone; it must be goal-oriented. I went back and studied GQM, the measurement-construction method proposed by academia and regarded as the de facto standard for R&D measurement. Much of that accumulated experience can serve the industry, so we are fully internalizing the method into the scenario design of Simayi's products.

The third move is reducing the cost of consuming R&D effectiveness measurement data. We tried many BI tools, but general-purpose BI often cannot meet the specific needs of the R&D efficiency field, such as scenario-based drill-down and analysis models. So we are building a display and drill-down tool dedicated to R&D efficiency, so that the data can truly be used by R&D teams.

The fourth move is integrating the R&D toolchain, which echoes the earlier summaries. DevStream, another open-source project initiated by Simayi, addresses exactly this; it was donated to the CNCF (Cloud Native Computing Foundation) and is under active development.

We have observed that the hidden cost of replacing R&D tools is still quite high. Many companies cannot afford to build one-stop tooling in-house and still use separate tools in each domain; different business R&D teams may even be used to different tools. So we hope a thin DevOps toolchain-manager layer can connect and integrate the tools around the application, improving engineers' experience at the lowest possible cost.
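A rough sketch of what such a toolchain-manager layer might look like: each tool is wrapped in a small adapter with a uniform interface, and the manager wires the adapters together around an application. The interfaces here are invented for illustration and are not DevStream's actual design.

```python
# Sketch of a thin toolchain manager: uniform adapters, wired per application.
from abc import ABC, abstractmethod

class ToolAdapter(ABC):
    """Uniform wrapper so the manager can drive heterogeneous tools."""
    @abstractmethod
    def install(self, app: str) -> None: ...
    @abstractmethod
    def connect(self, app: str, other: "ToolAdapter") -> None: ...

class IssueTrackerAdapter(ToolAdapter):
    def install(self, app: str) -> None:
        print(f"[{app}] ensure issue project exists")
    def connect(self, app: str, other: ToolAdapter) -> None:
        print(f"[{app}] link issues to {type(other).__name__}")

class CIAdapter(ToolAdapter):
    def install(self, app: str) -> None:
        print(f"[{app}] ensure pipeline exists")
    def connect(self, app: str, other: ToolAdapter) -> None:
        print(f"[{app}] post build status to {type(other).__name__}")

def converge(app: str, tools: list[ToolAdapter]) -> None:
    # Install every tool, then pairwise-connect them around the application.
    for t in tools:
        t.install(app)
    for i, t in enumerate(tools):
        for other in tools[i + 1:]:
            t.connect(app, other)

converge("payments-service", [IssueTrackerAdapter(), CIAdapter()])
```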

2 Obstacles and challenges in building R&D efficiency

Ren Jinglei: Both Mr. Ru and Mr. Ying just mentioned building and promoting new tools. There must be some resistance in that process. How do you deal with it?

Ying Kuohao: Promotion cost of a new tool = (old tool experience - new tool experience) + switching cost. Reducing the promotion cost can therefore start from several angles:

First, polish the new tool from a product perspective and watch the NPS. The first wave of users must be served well to secure the tool's reputation.

Second, fit the team's existing practices as much as possible, avoid introducing extra cognitive load, and migrate to the new tool smoothly.

Third, mind the compatibility of the two tools during the migration period, for example whether the data produced by the two tools is comparable.

Fourth, ride the momentum and do the groundwork well, such as the instrumentation for measurement and the selection of the North Star metric, and win the support of key decision makers before pushing hard; that gets twice the result for half the effort.

Ru Bingsheng: Let me add a bit of experience on finding the first wave of users.

Our strategy is to find projects that are typical and strategic in both business and technical terms, and give these pilot business R&D teams white-glove service: on the one hand, help them solve problems and see the effect of the efficiency improvements; on the other hand, let the business side define the real needs, uncover pain points and blockers, and polish the product. It is a win-win.

Once the pilots succeed, we productize and roll out at scale. By then the tool has been polished, the scenarios are clear, and documentation and chatbots are in place to support users at scale. Meanwhile, if a business team still has individual needs, we welcome co-development and open up plugins for secondary development.

Ren Jinglei: All very fresh first-hand experience. It sounds like building internal R&D tools closely resembles the early stage of a startup: watch the NPS, mind the barrier to entry, find seed users, and provide white-glove service.

Next, let's talk about the developer experience. Mr. Ru's 2022 practice mentioned it, and it would be evasive not to discuss it: how do we let developers benefit from efficiency building and experience improvement? This is in fact why many teams' efficiency initiatives stall; in the past, many developers simply equated efficiency with involution.

Zhang Le: I have also been wondering: from the perspective of the R&D team, or of managers, does wanting more output from existing resources inevitably conflict with the working experience of individual developers? Is there an inherent conflict of interest?

My personal answer is that performance measurement is mostly a tool and a means; what matters is the purpose. At the macro level, the outcomes and goals of the organization and the individual should be aligned: the ideal is that organizational efficiency improves, the business grows, and individuals benefit from it. At the micro level, managers must not treat engineers simply as resources to be squeezed or objects to be controlled; that is certain to go wrong.

A more reasonable way of thinking is this: on the basis of a reasonable workload for engineers, instead of grinding people through hours-based assessment, solve engineers' concrete problems and reach a consensus among the organization, the managers, and the engineers.

Our practice this year is mainly in this vein: less top-down pressure, more solving of real pain points, such as using automated tools to cut the share of manual operations in transactional work. A concrete example: recently we helped a team use tooling to simplify a complex orchestration mechanism spanning applications and regions, which greatly improved the team's productivity.

Ren Jinglei: Let me build on Zhang Le's point. The current economic downturn and low growth are the new normal, but it may not be a bad time for software engineering.

In the past there was plenty of hot money and the product side was busy innovating, so engineering generally polluted first and cleaned up later. Now that the pace has slowed, engineering has more room and time to attend to the health of the software and its infrastructure, which both reduces waste for the team and creates a better experience for engineers.

3 Measure with systems thinking

Ren Jinglei: Next, let's talk about building R&D performance measurement systematically. Mr. Ru mentioned a few keywords earlier: goal orientation, building causal loops, continuous measurement and continuous verification. Are there practical methods or examples to share?

Ru Bingsheng: Let me illustrate with a simple example. For urgent business feature development there is a common R&D model, DDD: not Domain-Driven Design, but Deadline-Driven Development.

Concretely, the delivery date is fixed, the feature scope is fixed, and the plan is worked out backwards, all in the name of efficiency and fast delivery. It seems to achieve the goal, but with systems thinking you find that this may not be so: with less time and more tasks, the only lever left is quality, and if the visible quality must not drop, the only thing left to cut is the internal quality.

To deliver features on time, developers often resort to tactical programming, ignoring extensibility, testability, and maintenance cost, and piling up technical debt. That debt drags down future R&D efficiency; typically within 6-12 months, velocity drops noticeably. To maintain delivery speed, the team then leans even harder on DDD, forming a vicious circle.

So how can measurement break this vicious circle and lead the team back onto the right track? The team can devote part of its energy to measuring technical-debt-related signals, such as code duplication, comment coverage, test coverage, coupled changes (files that keep being modified together), and code churn caused by rule changes, to curb DDD's erosion of the software's intrinsic quality. Thinking systematically this way lets you take a higher, longer-term view. When the rate of business-model change slows down, the capacity for sustainable development brought by sound software engineering practice is the organization's core capability.
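One of these signals, coupled changes, can be approximated directly from version-control history. The sketch below mines `git log` for pairs of files that are repeatedly committed together; the threshold is an arbitrary illustration, not a recommended value.

```python
# Rough sketch of a coupled-change (co-change) signal mined from git history.
import subprocess
from collections import Counter
from itertools import combinations

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:@@COMMIT@@"],
    capture_output=True, text=True, check=True,
).stdout

co_changes: Counter = Counter()
for commit in log.split("@@COMMIT@@"):
    files = sorted({line for line in commit.splitlines() if line.strip()})
    for pair in combinations(files, 2):
        co_changes[pair] += 1

# File pairs that change together many times hint at hidden coupling,
# a classic symptom of accumulated technical debt.
for (a, b), n in co_changes.most_common(10):
    if n >= 5:  # arbitrary threshold for the illustration
        print(f"{n:3d} co-changes: {a} <-> {b}")
```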

Zhang Le: Just to add: Mr. Ru mentioned causal loops, including reinforcing loops and balancing loops, but identifying true causality is costly. In practice we often look at the correlations between indicators to construct the influence loops among the factors.

A micro example: how do we reduce the online defect rate and improve software quality? We may suspect that code review was not done well enough before, so after improving code review we check whether the online defect rate changes accordingly. And we look not only at whether code review happened, but at whether it was done seriously: What is the density of review feedback? Is the reviewers' feedback substantive or a mere formality? We keep drilling down to analyze the correlations among these factors.

A macro example, which Mr. Ying touched on earlier: how do we explain the relationship between IT indicators and the business? The business is affected by many factors, but we can at least analyze the correlations, such as whether delivery throughput and responsiveness to the market are correlated with the speed of business growth, and whether built-in quality is correlated with user satisfaction and complaint rates.

When interpreting metrics, experimental thinking is essential. The industry has proven time and again that no solution is 100% effective. To verify whether a practice really works for your team, keep putting forward hypotheses and then verify them through correlation analysis of the data.
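As a sketch of this hypothesize-and-verify loop, the snippet below tests whether review feedback density correlates with the online defect rate. It assumes a hypothetical per-team, per-month metrics export; a real analysis would also control for confounders.

```python
# Sketch: verify a hypothesis ("deeper reviews reduce escaped defects")
# via rank correlation on hypothetical measurement-platform data.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("team_metrics.csv")  # e.g. one row per team per month

# review_comment_density: review comments per 1,000 changed lines
# defect_rate: online defects per delivered requirement
rho, p_value = spearmanr(df["review_comment_density"], df["defect_rate"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# A clearly negative rho with a small p-value supports (but does not prove)
# the hypothesis; a flat result sends you back to form a new hypothesis.
```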

Ren Jinglei: "Measuring in a systematic way" is also about avoiding measuring for measuring's sake, where people only stare at the numbers afterwards and gaming follows. Many viewers care about this question too: how do we keep performance measurement from becoming a numbers-chasing game?

Ying Kuohao: Let me answer this in three points.

First, as long as metrics are used for management, some teams with performance problems will inevitably want to take shortcuts and exploit loopholes to inflate the indicators. Personally, I don't recommend that the tools or performance team engage in an endless arms race of offense and defense; it drains energy on both sides. Gaming the data is essentially a matter of values, and the history can always be traced. If a business R&D team does not want to stray from its values, it should exercise a degree of self-discipline.

Second, managers need to reflect on whether they are rushing for results by tying R&D efficiency to performance appraisal, promotion, rewards, and punishments. Once they are tied together, some people will inevitably take risks. If managers assess the bug rate, the easiest response is to find bugs and simply not report them, which of course no one wants to see. Software R&D is a knowledge industry after all; piece-rate management in the style of traditional manufacturing is very foolish. Whether to link metrics to performance reviews must be weighed very carefully.

Finally, performance measurement is not for comparing against other teams; it is mainly for vertical comparison within a team or project, to see whether things are improving. The purpose of management is to inspire goodwill, and the same holds for R&D effectiveness: the point is to motivate everyone to make positive changes in practice.

Ren Jinglei: I agree with Mr. Ying's first point. Gaming the numbers is indeed unavoidable, and to some extent there is no need to avoid it entirely. Even non-quantitative mechanisms such as OKRs and KPIs see the same loophole-hunting games.

The positive improvement driven by measurement and the chasing of numbers are two sides of the same coin. Perhaps more attention should go to amplifying the good side and raising the cost of the other, for example by designing mutual checks and balances between indicators into the measurement system. As long as front-line developers end up putting most of their energy into real improvement, measurement has served its purpose.

Ru Bingsheng: Our earlier book cited the Hawthorne effect: as long as metrics are read by people, some of those being measured will inevitably chase, and even manufacture, the numbers. You cannot fight human nature.

I think the key to avoiding numbers-chasing is to avoid over-emphasizing single-point indicators and instead measure with a multi-dimensional matrix of mutually checking and balancing indicators. Deliberately chase one indicator, and the other indicators will betray you; the only way to improve the whole matrix is to genuinely improve R&D efficiency. This kind of systematic design uses external constraints to help those being measured become naturally self-disciplined.
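A toy illustration of such a mutually checking matrix: a headline metric's improvement only counts when none of its paired guardrail metrics has degraded. All numbers, pairings, and the tolerance below are invented for illustration.

```python
# Sketch of an indicator matrix with built-in checks and balances.
BASELINE = {"lead_time_days": 12.0, "escaped_defect_rate": 0.08, "test_coverage": 0.62}
CURRENT  = {"lead_time_days":  8.0, "escaped_defect_rate": 0.15, "test_coverage": 0.41}

LOWER_IS_BETTER = {"lead_time_days", "escaped_defect_rate"}
GUARDRAILS = {"lead_time_days": ["escaped_defect_rate", "test_coverage"]}
TOLERANCE = 0.05  # allow 5% relative worsening before a guardrail trips

def improved(metric: str) -> bool:
    delta = CURRENT[metric] - BASELINE[metric]
    return delta < 0 if metric in LOWER_IS_BETTER else delta > 0

def degraded(metric: str) -> bool:
    worsening = (CURRENT[metric] - BASELINE[metric]
                 if metric in LOWER_IS_BETTER
                 else BASELINE[metric] - CURRENT[metric])
    return worsening / abs(BASELINE[metric]) > TOLERANCE

for headline, guards in GUARDRAILS.items():
    tripped = [g for g in guards if degraded(g)]
    if improved(headline) and not tripped:
        print(f"{headline}: genuine improvement")
    else:
        print(f"{headline}: suspect, guardrails tripped: {tripped}")
```

Here faster delivery was "bought" with worse quality, so the matrix refuses to call it an improvement: the other indicators betray the chase.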

Zhang Le: An effectiveness measurement system includes outcome indicators and process indicators. Outcome indicators should be few and precise; they guide and drive the overall direction, and because many factors feed into them, gaming them is relatively costly. Process indicators are easier to game, so it is advisable to give teams autonomy: let each team decide whether to use them, reduce their exposure and their role in assessment, and emphasize their role in finding and solving problems. That too reduces numbers-chasing.

Second, both Mr. Ren and Mr. Ru mentioned measuring with a set of indicators, including the North Star metric, supporting metrics, and guardrail metrics, to head off short-sighted gaming.

Finally, back to the point about communicating values: performance measurement is not a finite game but an infinite game. If you play it by chasing numbers, you will not be able to keep playing for long; guide everyone to think harder about the cost of gaming.

Ren Jinglei: Here is a follow-up question from the audience: should the measurement platform be designed top-down? And in indicator design, which should come first, process indicators or outcome indicators?

Zhang Le: My view is that we must start from the outcome indicators, beginning with the end in mind. Measuring process indicators, digging into problems, and even making improvements all serve the big goal, and the big goal is reflected in the outcome indicators.

Once the outcome indicators are chosen, let each R&D team first understand its status quo. Different teams may then take different paths toward the goals, so each team chooses its own process indicators to discover and solve problems.

This again echoes the "goal orientation" mentioned many times already. The book Software Engineering at Google says that if you cannot act on a metric, then don't measure it at all. Before measuring, the goal must be thought through with complete clarity.

Ren Jinglei: A brief note on the difference between the GQM (Goal-Question-Metric) method and the GSM (Goal-Signal-Metric) method Google uses. GQM puts more emphasis on drilling down through layers of questions to build an implicit model, which makes it more applicable to fields like R&D efficiency, where the influencing factors are complex, there are many dimensions to decompose, and teams' individual needs are prominent.

Zhang Le: Looking at it now, I also find GQM the better fit. The Q, the questions, in GQM fall into two categories. Descriptive questions decompose and refine the goal, breaking vague ideas into measurable aspects. Exploratory questions belong to the process of experimenting toward a solution: first hypothesize a set of influencing factors, then verify the hypotheses through measurement. Overall, using questions to stimulate thinking lets GQM build a stronger bridge between goals and indicators.
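A minimal sketch of what a GQM decomposition with both question types might look like; the goal, questions, and metrics are invented purely for illustration.

```python
# Toy GQM tree: one goal, one descriptive and one exploratory question,
# and candidate metrics for each. A real model is built with the team.
GQM = {
    "goal": "Shorten requirement delivery time without hurting quality",
    "questions": [
        {   # descriptive: decompose the goal into measurable aspects
            "question": "Where in the flow do requirements wait longest?",
            "metrics": ["queue time per stage", "p85 stage lead time"],
        },
        {   # exploratory: hypothesize a factor, then verify with data
            "question": "Do oversized requirements drive long lead times?",
            "metrics": ["requirement size", "lead time vs. size correlation"],
        },
    ],
}

for q in GQM["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```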

Ren Jinglei: Mr. Ying's three tables, the results table, the process table, and the business-app table, are impressive. Could you share the design logic behind them?

Ying Kuohao: The design logic is the same as GQM's. Ziroom uses the OKR system very deeply, and the tables were designed with the same goal-oriented thinking. Ziroom has several requirements for the goals in an OKR: first, the business must be able to understand them; second, there must be a clear comparison between the status quo and the goal; third, they must be measurable.

With clear goals in hand, we consider how to quantify the work the R&D team does along the value chain to achieve them.

First, look at the outcomes of R&D activities, such as delivery efficiency, stability, and quality; that is the results table. If the results table shows an anomaly, look back into the detailed process so the R&D team can find the root cause on its own; that is the process table. Then look at the product's performance on the user side, such as satisfaction and daily active users; that is the business-app table. Later we will keep iterating according to the needs of R&D managers and front-line engineers.

4 Specific issues in each field

Ren Jinglei: Next is a question from the requirements domain: how do you manage requirement quality, and how do you reflect requirement quality in numbers?

Ru Bingsheng: Requirement quality is a rarely mentioned but very important topic. Poor requirement quality leads to repeated communication and even rework between product and R&D, which greatly hurts the efficiency and quality of software delivery.

Several things can affect requirement quality: Is the requirement format standardized? Are the materials complete? Can the requirement description be semi-structured? Is the requirement document simply too thin? As we said above, start with the end in mind: the team can decide, based on its current practical problems, how to strengthen the norms or set up entry gates.

Ren Jinglei: Let me share a customer's practice. The customer focused on the quality of goal-oriented requirements, that is, requirements that can be linked to quantifiable business results. After such a requirement is delivered, they track the achievement rate of its declared value, and from that infer the quality of those requirements.

Zhang Le: Building on what both of you said: standardize beforehand, watch requirement churn and the rework it causes during development, and track whether the requirement delivers its value after going live. Together that is a fairly complete framework.

Ru Bingsheng: There is a wrinkle here: the R&D team may not necessarily pay attention to value on the business side. Shouldn't the product team be watching that?

Zhang Le: The R&D team does sometimes push back: this requirement the product side asked me to build seems to have little value? At that point both sides need a shared basis for consensus.

Ying Kuohao: Exactly. Our R&D team reviewed a full year of requirements: the team completed 6,000+ requirements in a year, of which 1,800+ were invalid pseudo-requirements, including features that never launched or had very low daily activity. Product sometimes doesn't know what it wants.

Therefore, when it comes to practical improvement, the first is standardization. We distilled the lightweight practice of "the requirement on one page", which requires the product side to clearly answer: Who are the users? What is the scenario? What are the pain points? What function is expected? What resources are needed? The second is encouraging product managers and R&D engineers to challenge each other and argue the issues through. From business to product to R&D, every party is accountable for the value finally delivered.
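A minimal sketch of how the one-page practice could be enforced as an entry gate: a draft is bounced back if any of the five questions is left unanswered. The field names are hypothetical.

```python
# Sketch of a "requirement on one page" completeness check.
REQUIRED_FIELDS = ["users", "scenario", "pain_points",
                   "expected_function", "resources"]

def validate_one_pager(req: dict) -> list:
    """Return the list of unanswered questions; empty means it passes."""
    return [f for f in REQUIRED_FIELDS if not req.get(f, "").strip()]

draft = {
    "users": "landlords renewing a lease",
    "scenario": "renewal reminder 30 days before expiry",
    "pain_points": "",  # left blank: review should bounce this back
    "expected_function": "automatic reminder with one-click renewal",
    "resources": "1 backend dev, 1 week",
}
missing = validate_one_pager(draft)
print("missing answers:", missing or "none, ready for review")
```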

Ren Jinglei: The next question is on the quality dimension: how do you build a software quality system and a quality-built-in development mindset? And what should a startup team do about quality measurement on the road to integrating quality and efficiency?

Ru Bingsheng: For a startup team, quality measurement is somewhat redundant. The top priority is to survive, proving market demand for the product and validating the business model. Tactical programming is perfectly fine at that stage.

So what if you have piled up heavy technical debt and never built quality in? At some point, generally within 6-12 months, rewrite a version outright, with quality built in and technical debt under control. Once at that stage, quality must be measured and controlled, and any increase in technical debt strictly capped. Even if, in an emergency, the software ships with known defects, the debt must be repaid within at most four iterations.
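A sketch of what "strictly cap the increase in technical debt" could look like as a pipeline gate: compare the current build's debt signals against the last release and block when any of them grows. The signal values are stubbed here; in practice they would come from real analyzers.

```python
# Sketch of a technical-debt quality gate for CI.
import sys

LAST_RELEASE  = {"duplication_pct": 6.0, "uncovered_lines": 1200, "todo_count": 85}
CURRENT_BUILD = {"duplication_pct": 6.4, "uncovered_lines": 1150, "todo_count": 91}

regressions = {k: (LAST_RELEASE[k], CURRENT_BUILD[k])
               for k in LAST_RELEASE if CURRENT_BUILD[k] > LAST_RELEASE[k]}

if regressions:
    for k, (old, new) in regressions.items():
        print(f"tech-debt gate: {k} rose from {old} to {new}")
    sys.exit(1)  # block the merge/release until the debt is paid back
print("tech-debt gate passed")
```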

Ren Jinglei: One more addition. In a startup, development manpower may be limited, so be aware of the risky modules with a high incidence of bugs or costly, time-consuming bug fixes, and invest more time and energy in those weak links. That is a leaner way to build quality in.

5 Outlook for performance improvement in 2023

Zhang Le: Last year we proposed a simple model, the "golden triangle of R&D efficiency": efficiency practices, the efficiency platform, and efficiency measurement. The main goal for 2023 is to keep promoting it inside the company so that the triangle spins faster and becomes more self-consistent.

Specifically, in practices, we will keep improving the end-to-end flow of value and explore the integration of business efficiency with product-and-R&D efficiency. On the platform, beyond the relatively simple automation, we will explore intelligent applications such as intelligent search, AI-assisted programming, intelligent precise testing, and intelligent operations. In measurement, the GQM method deserves further digging to find more effective scenarios that solve specific problems at controllable cost, and automatic performance insights and even intelligent what-if deduction are also directions worth watching.

Ru Bingsheng: As mentioned many times today, the industry has verified that there is no 100% applicable solution, and what the big players use may not fit you. So the key in 2023 is to return to your own specific problems and solve them pragmatically, down to earth. Align the goals of performance practice top-down, and optimize the basic practice norms bottom-up.

Over the past few years, driven by cost-cutting and efficiency drives, the major players have paid more attention to R&D efficiency and opened many related positions. But the low-hanging fruit has mostly been picked, and R&D team sizes may not change much in the short term, so whether investment in performance will continue is genuinely in question. In 2023 the R&D efficiency team needs to be more results-oriented and actually create value for the business R&D teams.

Ying Kuohao: I have three key goals for 2023. First, converge the microservices and do subtraction, control technical debt, and measure the vital few. Second, apply tools and metrics at scale to genuinely help the front-line R&D teams. Third, for business enablement, consider building a platform that directly drives the business.

Ren Jinglei: Make foundational capabilities such as GQM more universal, lower the threshold for using R&D performance measurement tools, and let more users perceive the value of R&D performance measurement at lower cost.

Full video

DevData Talks [R&D Effectiveness] Year-end Q&A

"About DevData Talks"

DevData Talks is a column series that openly shares practical experience and methodology in R&D efficiency.

We invite industry experts to share advanced practices and deep thinking on R&D efficiency improvement, digital management, and more, continually accumulating high-quality content, and we discuss practice and thinking in the R&D efficiency field with peers, communicating, learning, and growing together.

If you want to go fast, go alone; if you want to go far, go together.


Source: blog.csdn.net/simayi2018/article/details/128579175