The Power of Stepping Back: Conducting Post-Mortems

Nothing is more valuable than maximizing the amount we learn from our experiences, individually and collectively. This learning matters too much to leave to chance. A post-mortem discipline is tremendously valuable to institutionalize in a company, with impact that goes beyond the direct learning from the sessions. This impact is felt particularly as part of shaping a culture in which people can engage openly about difficult truths.

Good post-mortems actually begin before the action unfolds, by focusing first on visualization.

Before undertaking an action, try to envision how the action and its consequences will develop over time. Make that visualization as rich and specific as possible: who will do what, what will happen, what the causal mechanisms that drive these consequences will be. In visualizing a path, there are always things we think we know and things we think we don’t know. Try to be explicit about each. What can you see unfolding, and where are you uncertain?

By visualizing what we think will happen before we act, we establish a baseline of what’s expected against which we can then evaluate what actually happens. In most organizations, people take actions and focus simply on whether they got a good enough outcome or not. If the outcome is acceptable, they move on. This isn’t the way to achieve excellence. By focusing on how an outcome is achieved, good or bad, and understanding variations between what we expect will happen and what actually happens, we learn to create the right outcomes in a consistent, systematic way.

It is valuable to distinguish two specific kinds of post-mortems: outcome assessments and “what went wrong” dialogues.

In an outcome assessment, the objective is to understand what happened and to make an appraisal of what’s good and bad about that outcome. We can tease out six steps in a good outcome assessment discussion:

  1. Framing: what outcome were we seeking to achieve? “Where are we in the plot” at this point?
  2. Evidence: what are the facts that are potentially relevant to assessing the outcome?
  3. Synthesis: what conclusion do these facts add up to? Are we on track or off track? Did we succeed or fall short?
  4. Identifying Variances: what turned out differently from how we visualized the work up front? What were the relevant breaks in the process? Where were there positive surprises or things that were just plain different from what we expected to see?
  5. Seeing Patterns: What are the most important factors that explain the relationship between our initial visualization and the outcome itself? If there were gaps between the visualization and the outcome, what was the cause? What can we learn at three levels: about the environment we’re operating in (highly specific features, or at a higher level of generality); about the organization and our own processes, capabilities and beliefs; about the specific individuals involved (strengths, weaknesses, etc.)? Looking at these important whys behind what happened in this case, can we connect the dots to what we’re seeing in other situations? What patterns emerge?
  6. Implications: What should we do about these observations, either related to the specific situation and the objectives we’d set out, or related to the broader patterns we’re observing?

Sometimes it isn’t practical, of course, to include all six of these steps in a single conversation. That’s fine. However, by being explicit about which steps the group is focused on, the conversation will be clearer and more efficient. If one person is putting forward evidence about what happened, another person is trying to synthesize whether the outcome was good enough and a third person is focused on the implications for what to do next, that’s a recipe for a confusing conversation that doesn’t meet any of those objectives well.

In a “what went wrong” dialogue, a group of people who touched a bad outcome sit down to understand what happened and what they can learn from it. The objective of sessions like this isn’t blame or punishment; the objective is understanding and improvement. This discussion, like the others, should naturally follow a logical pattern:

  1. Getting clear on the visualization of what should have happened: In order to understand “what went wrong,” what would it have looked like for things to have “gone right”?
  2. Identifying the variances: What broke down, what went differently than expected?
  3. Considering four types of causes: Examine the whys behind these variances, looking at four variables: the individuals involved (their beliefs, capabilities, actions/inaction), the dynamics and working relationship among the individuals, the design in which the people were operating, and the broader environmental context. Don’t hesitate to move from one kind of cause to others. If one is looking at an environmental cause (e.g., consumers resistant to adopting an innovation), also look through the lens of individual causes (e.g., who anticipated wrongly that they would adopt it, and what beliefs, capabilities, and actions were behind this mistake?), group dynamics (e.g., when this consideration was discussed, were team members uninterested or dismissive? did the group converge too fast?) and design (e.g., are responsibilities clear? does the organization design make sense? are processes well specified?).
  4. Identifying patterns: Looking at these causes, which ones appear to be patterns over time and/or across a range of situations? How general are these patterns? (E.g., is an individual’s overconfidence something that shows up only in a certain narrow range of situations, or more broadly? Is a whole class of processes weak in a common way, or just one very specific process?)
  5. Articulating implications related both to addressing the specific situation at hand and to addressing the broader patterns identified. These implications might be a combination of “things to do now,” “things to do when X occurs,” and “things to watch for” (e.g., it isn’t clear whether a certain pattern exists, and we want to get more evidence one way or the other).

Part of the art of having this kind of conversation well is balancing the generative work of examining multiple causal factors with working toward closure in interpretation: identifying which factors were important in the situation at hand and, even more crucially, which matter most going forward. Parties will naturally see situations differently, especially when things go wrong and natural instincts to be defensive kick in. In fact, these differences in views often have a great deal to do with what went wrong in the first place.

For these reasons, it is important to push past the discomfort and fear of talking about mistakes—particularly about one’s own and others’ weaknesses.

At the most basic level, people are responsible for outcomes, so if we aren’t willing to talk about where people are strong and weak, where people’s ways of operating are effective and ineffective, we won’t be accurate about what’s really going on—and we won’t be successful in improving results. An obvious truth, but one worth underscoring, is that people are better off having accurate, above-board conversations about their weaknesses, rather than being “spared” feedback out of concern that they might, in the moment, be unhappy to learn what others think.

Both “what went wrong” dialogues and outcome assessments should always have a leader. For an outcome assessment, the discussion leader will most often be the leader who is responsible for the area or the initiative, who calls together the relevant people to take a look at what’s been achieved and what hasn’t been. For a “what went wrong” dialogue, the discussion leader should be someone who has enough distance from the situation to be impartial—generally not the leader directly responsible for that result, as their thinking is one of the most important things to examine.

For these different kinds of post-mortems, it is important to record the lessons learned in some written form: sometimes a very brief essence, sometimes something more detailed. Otherwise, institutional memory is often lost and, most importantly, it becomes harder to connect the dots and see patterns that might not be obvious from looking at a single case.

All of these guidelines are meant to take kinds of dialogue that can be difficult to have well, even with the best of intentions, and create a clear method and set of concepts that everyone can refer to together. Just like playing an instrument or a sport, better technique yields better results. And, also like these other skills, once one masters a particular form, one can keep the essence while cutting away what’s inessential in any given context or varying the steps to fit new situations. Often it is best to err on the side of being more systematic when learning something new and difficult, and improvisational once a method becomes second nature.
