Beyond the Ritual of the Annual Review


An awful lot of ink has been spilled in the past couple of years about whether performance management is a fundamentally flawed idea, and whether formal reviews generally – and performance ratings in particular – should be scrapped. For people who would like to understand the arguments on both sides, two articles by friends capture the essence of the opposing positions well: David Rock’s Kill Your Performance Ratings in Strategy & Business and Marc Effron’s We Love Ratings.

I think Effron gets the better of the argument*, but my intention here is to focus on the question of how to do reviews well, not whether to do them at all. The problem with annual reviews, in my experience, isn’t that the idea is inherently flawed, or that (to paraphrase Rock’s argument) the human brain responds negatively to formal evaluation, but that reviews aren’t connected to the other practices required to make them effective. Without precision about what an annual review is meant to do, and without a set of mechanisms for goal setting, feedback and reflection that connect up to annual reviews, the year-end review easily becomes an empty ritual.

The single biggest trap that causes performance management to fail is confusion about different levels of evaluation, and conversations that skip levels or mix them up. At the finest level of granularity is the evaluation of a specific episode or event. We all tend, whether we explicitly capture them or not, to make many judgments about how episodes go (e.g., “the deck wasn’t well structured” or “you brilliantly defused a dynamic of defensiveness with our partner”). Often, however, these evaluations go unspoken, and conversation happens only much later, at the level of evaluation of a pattern across episodes, such as “you produce poorly-structured deliverables” or “you manage difficult interactions masterfully.”

A higher level of aggregation is evaluation of an outcome against a particular objective or in a particular area of responsibility (e.g., “you overspent your budget,” “the XYZ project was successful beyond expectations,” “you hired brilliantly”). One further level up from this is synthesizing an evaluation of all the outcomes involved in a role. Evaluating someone’s performance in her job as a whole involves looking across all the particular outcomes she’s achieved, and making a judgment about the picture that emerges from this whole (e.g., “your performance was about at the level we expected – you fully met four of your objectives, and fell short on one” is a very different interpretation from “your performance was excellent – you delivered well on the most important objective, even in the face of many unforeseen obstacles, while still delivering on three other objectives and deprioritizing an area we all agreed was less important”… where both might be consistent with the same goal-by-goal assessment).

Aside from evaluating the quality of an outcome, we make judgments about why the outcome occurred: evaluating the causes of outcomes. Did the project succeed due to the leader’s unusual insight about a prospective solution or her astute judgment about what partners to bring in and how to get the most out of those partners? Or did it succeed because a team member hired by the leader’s predecessor drove the project well, with little direct involvement from the leader herself? This causal attribution might not matter in assessing whether the leader “did a good job,” but matters a great deal in terms of forming a picture of the leader – how she operates now, and where she might need to develop in the future.

Looking across the causes of outcomes, and looking across the whys behind patterns observed across episodes, another, higher-level frame of evaluation is forming an overall “picture of a person.” Forming such an overall picture of an individual – her strengths and weaknesses, her way of operating, how she is evolving over time – requires not just a view of underlying “whys” that get beneath the outcomes achieved, but an accurate perception of patterns across many situations. The leader whose project was a standout success because a team member her predecessor hired drove the work well with little of the leader’s own involvement might be a great manager who makes superb judgments about people and chooses well where to engage versus where to give room – or she might have a pattern of disengagement with the details of the work, which she got away with in this instance through the good luck of someone else’s excellent hire. Either could fit the same evaluation of this outcome – the insight required to differentiate among these possibilities is up at the level of the pattern across many episodes and outcomes.

A final level of evaluation involves projecting the “picture of the person” forward in time. When we make judgments about things like “she’s ready to begin managing a larger team” or “he needs to gain experience working outside his area of expertise in order to grow as an executive,” we are engaged in this kind of evaluation.

Part of what’s difficult about performance management is that these fundamentally different ways of thinking can easily get jumbled together in a haphazard way. In my experience, the most common pathologies that sink performance management are:

  • There’s very little feedback at the episode level, and episodes are mostly discussed after the fact, as evidence for conclusions at the pattern level that the manager has already locked in on. This leads to missing information on both sides. The team member doesn’t get the opportunity to learn from data when it can most readily be absorbed, and the manager is at risk of making poor inferences when she lacks the employee’s own understanding of what happened at the episode level.
  • Evaluating outcomes gets conflated with evaluating people. This pattern causes people to depart from being clinical and accurate about what worked and what didn’t, and to stray into the politics of posturing and advocacy.
  • There’s an absence of dialogue about the whys behind outcomes. The performance management process then becomes a “big reveal” of the degree to which someone got credit or blame for various outcomes, versus a thoughtful discussion of their overall performance, trajectory and priorities for development.
  • The discussion of strengths and weaknesses is treated like a checklist, rather than a thoughtful examination of the specific individual’s profile and how she, with her particular bent and abilities, can best perform and best develop.

These pathologies aren’t that difficult to root out and overcome if people are aware of them and guard against them. However, if they are pervasive in an organization, it is very difficult for an annual review process, no matter how well designed (and most are designed poorly), to do any better than reflect and amplify these underlying issues.

I don’t believe that there is “one best way” to design performance management to avoid these pathologies. Approaches to performance management can and should differ widely among organizations based on their performance imperatives, their broader organizational and management systems and their cultures. Nearly all performance management systems, however, should in one way or another deliver on at least six basics:

  1. A closed loop that compares goals set and outcomes achieved, and gives all the relevant people the best available assessment of achievements and gaps
  2. An understanding of the role a person played in contributing to these outcomes – the quality of the decisions they made, how their capabilities or capability gaps impacted results, where their actions moved the needle, and so on
  3. An overall picture of each individual – strengths and weaknesses, skills, patterns of behavior, motivations, etc. – that can inform both that individual’s self-development as well as collective decisions (e.g., assignments, fit with the organization)
  4. Translation of this “picture of the individual” into insights about how they can develop (e.g., “what would a more senior version of this individual look like?”) and into support for their development (e.g., assigning them to roles that promote development, providing coaching and training)
  5. Performance and capability translated into compensation decisions in a way that reflects the organization’s overall philosophy of compensation
  6. Transparency about how assessments are made, which enables people to understand what they’ll be measured on and rewarded for (in some cases, tangible and concrete outcomes; in other cases, perhaps more judgment-based assessments), establishes confidence in the process and promotes open dialogue

Not all of these elements need to be put into an annual review process, and in many cases the annual review process will work better if some of these functions get performed in other ways. For instance, at Incandescent, we make an effort to align with team members about the quality of outcomes in each area of responsibility before the review is written, so that the review can be freed up to focus on synthesis and trajectory of development, rather than on providing new information about how well things have gone within an individual’s job.

An organization that places great weight on variable compensation based on individual outcomes – e.g., an investment bank – will naturally skew differently in its approach to year-end reviews than an organization where the review is primarily viewed as a tool for development. None of that implies, however, that any of the six basics need to be neglected. If the year-end process is primarily about “dividing the pie” of available bonuses, for instance, then a frank discussion of strengths, weaknesses and development needs has to occur in a different context. Someone might deserve a very high bonus on the basis of results accomplished in a specific, high-stakes context – and it might be essential for them to hear that there are weaknesses they need to address or behaviors they need to change in order to have a successful long-term career.

In the end, there is no substitute for thoughtfulness. The business of providing feedback, coaching development, synthesizing assessments, and connecting assessments to high-stakes decisions about money and future opportunities doesn’t reduce to any silver bullet design. Having a clear understanding of the many moving parts of performance management – the different levels of evaluation and the ways they relate – makes a big difference. The skills and disciplines of this aspect of the art of management can certainly be learned. Like most things worth learning, they can’t fully be reduced to a procedure and aren’t likely to be learned by “doing what comes naturally” without attention to technique.

________

* We’re voting with our feet. At Incandescent we lavish a great deal of time and energy on writing long, narrative performance reviews twice per year. At year-end, we rate each individual against the baseline of the performance expected of them, and tie variable compensation to that rating. We view ratings as a time trial rather than a race: an absolute comparison of what an individual achieved to what we would have expected at their level and in their role, rather than a comparison to peers.


Niko Canner
Founder

Niko Canner founded Incandescent in 2013. His work spans the firm’s three major areas of focus: serving as a thought partner to leaders of large enterprises on strategy, organization and innovation; advising founders on the development of their ventures; and partnering with foundations and non-profits engaged in systems change.


