Replay: knowledge everyone should know

What is a replay?

Replay was originally a Go term: after a game, the players play through the game again to discover their own mistakes, understand the opponent's thinking, and study better moves. Many Go masters regard the replay as an important way to improve their skill. Later, Mr. Liu Chuanzhi introduced the replay into the field of management.

Why do a replay?

The purpose of a replay is to obtain better feedback by summarizing after the fact.

You may have heard of the 10,000-hour rule: an ordinary person can become a master in a field after 10,000 hours of deliberate practice. In fact, in many fields this is not the case. The key is whether you practice with feedback. I recently heard a thought experiment about feedback that illustrates its importance well.

Suppose a blind person tries to solve a Rubik's Cube with no feedback at all. How long do you think it would take? By one calculation, even 13.7 billion years would not be enough; relying on pure probability, random twisting will essentially never solve it. But add one thing: after every twist, tell him whether he is now closer to or farther from the goal. Then it takes only about two and a half minutes.
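The effect of feedback on a search like this can be sketched in a toy program (my own illustration, not from the article): guessing a hidden word by random mutation, keeping a change only when a feedback score says we moved closer.

```python
import random

random.seed(0)  # make the example deterministic

TARGET = "REPLAY"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def score(guess):
    """Feedback signal: how many positions already match the target."""
    return sum(a == b for a, b in zip(guess, TARGET))

def solve_with_feedback():
    """Randomly mutate one letter at a time, reverting any change the
    feedback says made things worse. Returns (solution, attempts)."""
    guess = [random.choice(ALPHABET) for _ in TARGET]
    attempts = 0
    while score(guess) < len(TARGET):
        i = random.randrange(len(TARGET))
        old, before = guess[i], score(guess)
        guess[i] = random.choice(ALPHABET)
        attempts += 1
        if score(guess) < before:
            guess[i] = old  # feedback: farther from the goal, so revert
    return "".join(guess), attempts

word, attempts = solve_with_feedback()
```

With feedback, this converges in a few hundred attempts; blind random guessing over the same six-letter alphabet would need on the order of 26^6, roughly 300 million tries, on average.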

The specific steps of a replay

Take project development as an example

First: Review your goals.

Recall the purpose or expected outcome of this version's requirements. Generally, we use OKRs or KPIs to evaluate how well the goal was met.

Second: Evaluate the results.

Did you achieve your goals? If not, how far are the results from what was expected? Look not only at the team's overall goals but also at the individual goals of everyone on the team.

Third: Analyze the reasons.

Review the whole process before, during, and after the work, and analyze why the goal was achieved, or why it was not.

Common analysis ideas are as follows:

  1. Was the goal sound? If so, why? If not, how can it be improved?

  2. Is there room for improvement in the execution process? If so, how should it be improved? If not, what was done well?

  3. For controllable causes, formulate solutions; for uncontrollable causes, do good risk control, such as making information flow more smoothly and preparing a plan B.

In many cases, it's not that we don't know what to ask, but that we don't know how to ask. Here I recommend the 5 Whys root-cause analysis method, which I wrote about yesterday, for asking why.
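The 5 Whys drill-down can be represented as a simple loop. This is a sketch of mine, and the cause map below is a made-up example, not from the article:

```python
def five_whys(problem, ask_why, depth=5):
    """Repeatedly ask 'why?' about the latest answer, building a chain
    from the surface problem down toward a root cause."""
    chain = [problem]
    for _ in range(depth):
        cause = ask_why(chain[-1])
        if cause is None:  # no deeper cause known
            break
        chain.append(cause)
    return chain

# Hypothetical cause map, for illustration only.
CAUSES = {
    "release slipped by a week": "testing started late",
    "testing started late": "the build was broken for days",
    "the build was broken for days": "no one owned the CI pipeline",
}

chain = five_whys("release slipped by a week", CAUSES.get)
```

The last element of the chain is the candidate root cause; in practice the `ask_why` step is a conversation, not a lookup table.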

Fourth: Summarize the experience and use it for the next iteration.

Everything that went well should be summarized as transferable experience and carried into the next version. For what went poorly, formulate targeted solutions, estimate their expected effect and cost, and place them in a two-by-two matrix of effect versus cost. Prioritize implementing the low-cost, high-effect solutions, and verify them in the next version.
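The effect/cost two-by-two can be sketched as a small prioritizer (my own sketch; the 1-to-5 scoring scale and the threshold are assumptions, not from the article):

```python
from dataclasses import dataclass

@dataclass
class Improvement:
    name: str
    effect: int  # expected effect, 1 (low) .. 5 (high); scale assumed
    cost: int    # estimated cost,  1 (low) .. 5 (high); scale assumed

def quadrant(item, threshold=3):
    """Map an item to its quadrant; lower numbers get done first."""
    high_effect = item.effect >= threshold
    low_cost = item.cost < threshold
    if high_effect and low_cost:
        return 1  # high effect, low cost: do first
    if high_effect:
        return 2  # high effect, high cost: plan carefully
    if low_cost:
        return 3  # low effect, low cost: quick wins of limited value
    return 4      # low effect, high cost: defer or drop

def prioritize(items):
    """Order items by quadrant, then cheapest and most effective first."""
    return sorted(items, key=lambda it: (quadrant(it), it.cost, -it.effect))

items = [
    Improvement("rewrite the build system", effect=5, cost=5),
    Improvement("daily stand-up meeting", effect=4, cost=1),
    Improvement("fix one flaky test", effect=2, cost=1),
]
plan = prioritize(items)
```

The resulting order puts the stand-up meeting first: high expected effect at low cost.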

Common problems in replays

The first problem is that the replay meeting turns into a blame meeting.

The more complicated the project, the more problems a replay will surface. Everyone has problems to some degree, but most people are unwilling to admit their own and are more inclined to find fault in others. This easily leads to mutual blame. The root of the situation is a wrong understanding of what the replay is for. Although a replay summarizes what happened in the past, its purpose is not accountability and punishment; it is to analyze the causes so as to optimize better for the future.

The second problem is how to turn the valuable conclusions of the replay into the next round of improvement.

The conclusion of a replay is not necessarily correct, and even when it is, it is unlikely to be fully realized in one step. What we can do is distill the experience into an executable plan and put it into the next version for iterative verification. My own experience is to implement conclusions in three forms: processes, systems, and checklists. For example, if in this project we found that meeting once a day helped the project's progress, we can turn that into a fixed daily stand-up meeting and standardize the meeting rules to ensure its effectiveness.

Here I especially recommend checklists. A checklist is simply a set of implementation steps. For example, if this release went very smoothly from testing to launch, we can summarize a must-check launch checklist.
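A launch checklist can be as simple as a list plus a check that nothing is missing. This is a sketch; the item names are invented for illustration, not taken from the article:

```python
# Hypothetical must-check items before a launch; replace them with the
# steps your own replay surfaced.
LAUNCH_CHECKLIST = [
    "all regression tests pass",
    "rollback plan written down",
    "monitoring dashboards updated",
    "release notes reviewed",
]

def missing_items(checklist, completed):
    """Return the checklist items not yet marked complete."""
    done = set(completed)
    return [item for item in checklist if item not in done]

blockers = missing_items(
    LAUNCH_CHECKLIST,
    ["all regression tests pass", "release notes reviewed"],
)
```

An empty `blockers` list means the launch gate is clear; anything left is an explicit, shared to-do rather than a memory test for whoever happens to be on call.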

The third problem is that no consensus can be reached on the conclusions of the replay.

Everyone has their own perspective, so in a replay, people's conclusions may not be exactly the same. We should seek common ground while reserving differences, and let the most experienced person, or the person responsible for the matter, make the decision according to the company's specific situation. Even if the decision turns out to be wrong, we can correct it in the next replay.

Regarding replay conclusions, I have two pieces of experience. One is to respect common sense: if something can be explained simply, don't overcomplicate it. The other is that unconventional conclusions need more evidence to support them; user behavior data and user research are tools for obtaining that evidence.

Commonly used replay models

Here are two commonly used replay models:

PDCA

One is the PDCA model, which has four stages: Plan, Do, Check, and Act. Its optimization mainly shows up as revisions to concrete work standards, such as management systems and project acceptance criteria. Error-prone points can be organized into a checklist, and solutions can be solidified into the process. Each PDCA cycle verifies or optimizes the standard of doing things once. But PDCA has its limits: if the goal itself is wrong, PDCA will not tell you.
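The cycle can be sketched as a loop (a minimal sketch of mine; the defect-count example is invented just to show the shape of the cycle):

```python
def pdca(standard, execute, acceptable, revise, max_cycles=10):
    """Plan is embodied in the current `standard`; each cycle Does the
    work, Checks the result, and Acts by revising the standard."""
    result = None
    for _ in range(max_cycles):
        result = execute(standard)           # Do
        if acceptable(result):               # Check
            break
        standard = revise(standard, result)  # Act: revise the standard
    return standard, result

# Toy example: each checklist item added removes one simulated defect.
final_standard, defects = pdca(
    standard=5,                          # start with 5 checklist items
    execute=lambda s: max(0, 8 - s),     # simulated defect count
    acceptable=lambda d: d == 0,
    revise=lambda s, _: s + 1,           # add one more checklist item
)
```

Note what the loop cannot do: it only tunes `standard` against the `acceptable` test it is given. If that test encodes the wrong goal, every cycle will faithfully optimize toward the wrong thing, which is exactly the limitation described above.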

PDF

The other is the PDF model, which has three stages: preview (a sand-table-style simulation), execution (Do), and replay (FuPan). Where PDCA divides one thing into four stages, PDF does one thing three times: the first pass (the preview) and the third pass (the replay) are virtual, and only the second, the execution, is real. Its optimization mainly shows up as a deeper understanding of the essence of things, but its limitation is that it depends heavily on the reviewer's personal ability and is not very replicable.

In short, everyone should do replays well. No one is born excellent, and neither is any team; every excellent team levels up through iteration. The more timely the replay, the fresher everyone's impressions and the better the effect. In a project, everyone needs to participate in the replay, which also means that besides improvements at the organizational level, individuals should make their own improvement plans too.

Origin blog.csdn.net/sys025/article/details/131237202