Some thoughts on performance appraisal in agile development

    I had the honor of attending the 2014 World of Tech (WOT) Global Software Technology Summit as a guest this morning, and I learned a great deal from Mr. Yuan Bin's and Mr. Wang Lijie's talks on agile practice and agile teams. Unfortunately, due to time constraints, neither speaker said much about how an agile development team should do performance appraisal. On that topic, Mr. Yuan mainly proposed the ratio of quality bugs to story points as a key indicator, while Mr. Wang mainly offered guidance on how to assess team performance. Although I asked Mr. Wang some questions on site and he gave very good answers, several questions about performance appraisal still puzzle me.
    1. Who sets the KPIs, and who does the assessing?
    When an agile team does performance appraisal, the key indicators should still, as far as possible, come from within the agile team. But who sets them: the CPO, the PO, the Scrum Master, or the team leader? This matters especially when the team grows to 50 or even 100 people and the product is large enough to be split into multiple subsystems. And who then does the assessing? The Scrum Master may understand the process better, the product manager may understand the requirements more deeply, and the system-level task splitting is done by the CPO, so the CPO has the strongest grasp of the whole. At that point a problem emerges: assessment becomes multi-level, high-dimensional work. How should the assessment indicators be weighted then, and how should the assessment actually be carried out?
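    As a minimal sketch of the weighting question, suppose each assessor's score is combined by a simple weighted average. The roles, weights, and scores below are my own assumptions for illustration, not anything proposed at the summit.

        # Combine per-assessor scores into one number; weights are hypothetical.
        def combined_score(scores, weights):
            total_weight = sum(weights.values())
            return sum(scores[r] * weights[r] for r in scores) / total_weight

        # Assumed weighting: the CPO sees the whole, so gets the largest weight.
        weights = {"scrum_master": 0.3, "product_owner": 0.3, "cpo": 0.4}
        scores  = {"scrum_master": 85,  "product_owner": 78,  "cpo": 90}
        print(combined_score(scores, weights))  # -> 84.9

    Even this toy version shows the difficulty: the weights themselves encode a judgment about whose view of the work matters most.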
    2. How to avoid the "big pot" phenomenon?
    Mr. Wang actually answered this question on the spot, but in any case we cannot deny that the "big pot" phenomenon, where everyone is rewarded roughly equally regardless of contribution, is a real headache. We have tried an idea Mr. Wang mentioned, having team members rate each other, but the results were not encouraging: for various reasons, the scores between team members ended up very close together. If the team reaches the stage of self-management or even self-organization, I think the big pot phenomenon will be even harder to avoid, because by then the indicators are harder to formulate, the weights are harder to measure, and everything becomes loose and vague.
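    One commonly cited countermeasure, sketched below under the assumption that raw peer scores cluster, is to normalize each rater's scores (for example, as z-scores) before averaging, so that relative differences survive even when the raw numbers bunch together. The names and scores are made up for illustration.

        from statistics import mean, pstdev

        ratings = {  # rater -> {ratee: raw score}; illustrative data only
            "alice": {"bob": 88, "carol": 90, "dave": 89},
            "bob":   {"alice": 92, "carol": 91, "dave": 93},
        }

        def normalized(ratings):
            out = {}
            for rater, given in ratings.items():
                mu, sigma = mean(given.values()), pstdev(given.values())
                for ratee, score in given.items():
                    z = 0.0 if sigma == 0 else (score - mu) / sigma
                    out.setdefault(ratee, []).append(z)
            # Average each person's z-scores across all raters.
            return {ratee: mean(zs) for ratee, zs in out.items()}

        print(normalized(ratings))

    Normalization spreads the scores out, but it cannot tell honest consensus from polite reluctance to differentiate, so it mitigates rather than solves the big pot problem.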
    3. How to control the granularity of story points?
    Mr. Yuan mentioned a performance measure: the ratio of quality bugs to story points. Here too, problems arise: how do we control the granularity of story points, and how do we ensure fairness when splitting stories? A common story, such as a login story, is obviously much easier to complete than a story involving complex algorithms. How, then, should we add auxiliary conditions or make adjustments to the performance appraisal?
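    To make the fairness issue concrete, here is a small sketch of the bugs-to-story-points ratio extended with a hypothetical complexity factor, so that a login-style story and an algorithm-heavy story do not count identically. The factor values are my own assumption, not part of Mr. Yuan's proposal.

        stories = [
            # (story points, quality bugs, assumed complexity factor)
            (3, 1, 1.0),   # e.g. a simple login story
            (8, 4, 2.0),   # e.g. a story involving complex algorithms
        ]

        def quality_ratio(stories):
            # Weight story points by complexity before taking the ratio.
            weighted_points = sum(pts * factor for pts, _, factor in stories)
            bugs = sum(b for _, b, _ in stories)
            return bugs / weighted_points

        print(quality_ratio(stories))  # 5 bugs / 19 weighted points ~= 0.26

    Choosing the complexity factors is, of course, exactly the granularity and fairness problem restated.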
