How to Read Companies’ Success Stories

As I argued before (read this blog post on the halo effect), we like simple stories that account for a company's success or failure. These stories are almost always delusional. So, what should you keep in mind when you pick up the latest best-selling business book written by a famous CEO or management thinker? The answer: beware of biases. In a beautiful little chapter called 'The Illusion of Understanding' in Thinking, Fast and Slow, Nobel Prize winner Daniel Kahneman gives an example of the biases involved when trying to explain Google's tremendous business success:

The ultimate test of an explanation is whether it would have made the event predictable in advance. No story of Google's unlikely success will meet that test, because no story can include the myriad of events that would have caused a different outcome. The human mind does not deal well with nonevents. The fact that many of the important events that did occur involve choices further tempts you to exaggerate the role of skill and underestimate the part that luck played in the outcome. Because every critical decision turned out well, the record suggests almost flawless prescience – but bad luck could have disrupted any one of the successful steps. The halo effect adds the final touches, lending an aura of invincibility to the heroes of the story.

Kahneman does not deny that there was skill involved in creating a great company like Google. But since there are not many opportunities to practice building a great company, luck has a greater impact than skill in a business environment. (Contrast this with Roger Federer's Grand Slam record: there is more skill than luck involved there, because a tennis career offers thousands of repetitions.) A small simulation below illustrates why the number of repetitions matters.
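
The sketch models each outcome as skill plus noise. All numbers are made-up assumptions of mine, not Kahneman's: with a single attempt per player, the observed result says little about underlying skill; average over many attempts, and skill dominates.

```python
import numpy as np

rng = np.random.default_rng(42)
n_players = 10_000

# Latent skill; each attempt's outcome is skill plus a much larger luck term.
skill = rng.normal(0, 1, n_players)

def observed_performance(n_attempts):
    """Average outcome over n_attempts; luck averages out as n grows."""
    luck = rng.normal(0, 3, (n_players, n_attempts))
    return (skill[:, None] + luck).mean(axis=1)

for n in (1, 10, 1000):
    perf = observed_performance(n)
    r = np.corrcoef(skill, perf)[0, 1]
    print(f"attempts={n:5d}  corr(skill, observed outcome) = {r:.2f}")
```

With one attempt (one shot at building a company) the correlation between skill and outcome comes out around 0.3 here; with a thousand attempts (a long tennis career) it climbs close to 1. The exact figures depend on the assumed luck-to-skill ratio, but the direction does not.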

What you need to do, then, is be aware of a number of biases, fallacies and hidden statistical rules that may be at play when you read a business success story. The next sections briefly explain the ones I found in Thinking, Fast and Slow that relate to false explanations of business success.

The Halo Effect

Defined as the tendency to like or dislike everything about a person or a company (including things you have not observed), the halo effect can be an obvious bias in business books. In his book on the halo effect, Phil Rosenzweig phrases it like this:

[The halo effect is] the tendency to look at a company’s overall performance and make attributions about its culture, leadership, values, and more. In fact, many things we commonly claim drive company performance are simply attributions based on prior performance.

In other words, we like the company's performance and then attribute that performance to things like culture, leadership and management techniques. We start to like everything about the company and thus create a halo. The actual driver of performance, the causal link, is usually not exposed: there is surprisingly little quantitative data linking performance to a leadership style or a management technique (including highly popular ones like Agile). The halo effect stands between us and judging the different elements of the organization in isolation (leadership, strategy, structure, culture, etc.): it is either all good or all bad.

The Narrative Fallacy

A narrative fallacy is a flawed story of the past that strongly shapes our view of the world and, not unimportantly, our expectations of the future. The problem, of course, is that the stories we construct about why things happened are often wrong, oversimplified and too concrete, and therefore cannot serve as a blueprint for future success. Kahneman:

Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative. (…) we humans constantly fool ourselves by constructing flimsy accounts of the past and believing they are true.

This is a very powerful trap: we just like stories too much. And once there is a convincing story, we fail to ask more questions.

WYSIATI

We can actually create better stories when we have less information (exacerbating the narrative fallacy!). Less data makes it easier to create a coherent story. What you see is all there is (Kahneman uses the rather ugly abbreviation WYSIATI):

At work here is that powerful WYSIATI rule. You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.

Instead of asking yourself 'What would I need to know to form an informed opinion?', your brain will go along with any story that sounds intuitively right. But, as I argued elsewhere, intuition is usually wrong when it comes to explaining complex problems or environments. Since the business environment is indeed complex (maybe even a complex adaptive system with lots of positive feedback loops), judging a company on little information only feeds the narrative fallacy and, possibly, the halo effect.

Hindsight Bias

We tend to revise the beliefs we previously held in line with what actually happened, which makes it really hard to recall those beliefs once an actual outcome has changed them. If you ask people to assign probabilities to certain scenarios beforehand, then show them the actual outcomes and ask them what their initial probability ratings were, they will overestimate the probability they assigned to the scenario that actually played out. This is a problem, because this so-called hindsight bias feeds the narrative fallacy: CEOs or entrepreneurs may be portrayed, in hindsight, as having assigned the right probability to the scenario that made the company thrive. Hindsight bias turns these leaders into true visionaries. According to Kahneman, they were probably just lucky:

Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success, and the sensible people who doubted them [i.e. who assigned better probabilities beforehand] are seen in hindsight as mediocre, timid, and weak. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.

Regression to the Mean

Extreme groups (including over- and underperformers in business) will regress to the mean over time. This is a statistical fact; there is no cause. Kahneman, when discussing regression to the mean, directs his attention to some of the most lauded management books:

The basic message of Built to Last and other similar books is that good managerial practices can be identified and that good practices will be rewarded by good results. Both messages are overstated. The comparison of firms that have been more or less successful is to a significant extent a comparison between firms that have been more or less lucky. Knowing the importance of luck, you should be particularly suspicious when highly consistent patterns emerge from the comparison of successful and less successful firms. In the presence of randomness, regular patterns can only be mirages. Because luck plays a large role, the quality of leadership and management practices cannot be inferred reliably from observations of success. And even if you had perfect foreknowledge that a CEO has brilliant vision and extraordinary competence, you still would be unable to predict how the company will perform with much better accuracy than the flip of a coin.
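
A quick simulation (my own illustration, with assumed numbers) makes the statistics visible: when performance is part stable skill, part luck, the top firms of period one drift back toward the average in period two even though nothing about them changed.

```python
import numpy as np

rng = np.random.default_rng(7)
n_firms = 100_000

skill = rng.normal(0, 1, n_firms)           # stable component
perf_1 = skill + rng.normal(0, 1, n_firms)  # period 1 = skill + luck
perf_2 = skill + rng.normal(0, 1, n_firms)  # period 2 = same skill, fresh luck

top = perf_1 >= np.quantile(perf_1, 0.95)   # the 'great' firms of period 1

print(f"top firms, period 1:  {perf_1[top].mean():.2f}")
print(f"same firms, period 2: {perf_2[top].mean():.2f}")
```

In this setup, the period-two average of yesterday's stars sits roughly halfway back to the mean: exactly the share of their period-one edge that was luck. A business book profiling the period-one winners would nevertheless find plenty of 'practices' to credit.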

Conclusion

What I am not suggesting is that leadership style and management techniques do not matter. Of course they do: not implementing best business practices already puts your firm at a disadvantage. What I am suggesting, however, is that company performance is less influenced by management and leadership styles than you would like to believe. It all boils down to two questions (Kahneman):

  • Is the environment sufficiently regular to be predictable?
  • Is there an opportunity to learn these regularities through prolonged practice?

If the answer to both questions is yes, you find yourself in an environment where you can acquire a skill. This is why you can acquire high proficiency in tennis, surgery and firefighting (and probably in some management techniques such as Agile or performance management), but not in overall business management: the business environment is not sufficiently predictable and there is no opportunity for prolonged practice (how many chances does an entrepreneur get to build Google?). Sure, a good CEO makes a difference. And please read her latest book. But, while reading, remind yourself of the pitfalls described in this article:

  • the halo effect;
  • the narrative fallacy;
  • what you see is all there is;
  • hindsight bias;
  • regression to the mean.

To conclude, a remarkable quote from The Economist on managers in football that might back up my claims in this post:

Fans lay most of the credit or blame for their team’s results on the manager. So do executives: nearly half of clubs in top leagues changed coach in 2018. Yet this faith appears misplaced. After analyzing 15 years of league data, we found that an overachieving manager’s odds of sustaining that success in a new job are barely better than a coin flip. The likely cause of the “decline” of once-feted bosses like Mr Mourinho is not that they lost their touch, but their early wins owed more to players and luck than to their own wizardry.


On the Need for Redundancy in Project Plans

As a preface to this blog entry, forget the strict definition of redundancy in the Oxford English Dictionary, which tells you that redundant means 'no longer needed or useful'. As will become clear – I hope – redundancy is needed to have an option to maneuver when things happen that you could not possibly have predicted. In other words, redundancy gives you optionality. See redundancy not as superfluous but as an insurance. Or even as an investment.

Projects fail to meet deadlines and budgets all the time. While rereading random passages in Nassim Taleb's stimulating and provocative books The Black Swan and Antifragile, it struck me that I always try to build a little redundancy into project plans because 'you never know what will happen' – while almost never taking the time to consider the rationale behind that subconscious whisper.

Taleb shares some nice insights on the reasons why redundancy (in general) is useful. I hope reading this blog entry will make you look at redundancy – and the world – from a somewhat different angle. (In a way, it is what this blog is all about: discussing concepts and tools that change the way you look at the world. Remember Proust? ‘My destination is no longer a place, rather a new way of seeing.’)

I will discuss two of Taleb's concepts that will make it easier for you to include some well-contemplated redundancy in your future projects. Including redundancy will increase the success rate of your projects. But you need to be able to explain why you included it in your budget. Here is how to do that.

Concept 1: the world is more random than you think

Taleb constantly challenges you on how you look at the world. One of the main themes in his books is that the world is more random than we think, and that we are often fooled by this randomness. In Antifragile he argues:

Black Swans (…) are large-scale unpredictable and irregular events of massive consequence – unpredicted by a certain observer (…). I have made the claim that most of history comes from Black Swan events, while we worry about fine-tuning our understanding of the ordinary, and hence develop models, theories, or representations that cannot possibly track them or measure the possibility of these shocks.

Black Swans hijack our brains, making us feel we “sort of” or “almost” predicted them, because they are retrospectively explainable. (…) Life is more, a lot more, labyrinthine than shown in our memory – our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness.

In The Black Swan, Taleb claims that large-scale events cannot be predicted (and are in effect random to the observer):

I discovered (…) that no researcher has tested whether large deviations in economics can be predicted from past large deviations – whether large deviations have predecessors, that is. (…) My results were that regular events can predict regular events, but that extreme events, perhaps because they are more acute when people are unprepared, are almost never predicted from narrow reliance on the past. The fact that this notion is not obvious to people is shocking to me. It is particularly shocking that people do what are called “stress tests” by taking the worst possible past deviation as an anchor event to project the worst possible future deviation, not thinking that they would have failed to account for that past deviation had they used the same method on the day before the occurrence of that past anchor event.

In Antifragile, he goes as far as to call this a mental defect:

I have called this mental defect the Lucretius problem, after the Latin poetic philosopher who wrote that the fool believes that the tallest mountain in the world will be equal to the tallest one he has observed. (Note: Read the wonderful poem on science and philosophy by Lucretius called On the Nature of Things. In the 2007 Penguin edition, the translation of the passage Taleb is referring to actually reads as follows: “And any stream will seem to be, to one who’s never seen a larger, the greatest of rivers (…). Indeed, anything we see, we shall imagine, is the largest specimen of its kind if it’s the largest we’ve laid eyes on.”)

So, taking into account that randomness is a fact of life and that we cannot predict big events, there needs to be some redundancy to compensate. The next time you sit down and think through the things that can hurt your project plan, think 'randomness' and 'Lucretius problem'.
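
Here is a minimal sketch of the stress-test point, under shocks I assume to be fat-tailed (Pareto) purely for illustration: anchoring the 'worst case' to the worst deviation seen so far keeps getting overtaken, often by a wide margin.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fat-tailed shocks; a Pareto tail index of 1.5 is an assumption for illustration.
shocks = rng.pareto(1.5, 10_000)

anchor = shocks[:500].max()   # 'stress test': worst deviation in the history so far
later = shocks[500:]

print(f"anchor (worst of first 500 draws): {anchor:.1f}")
print(f"later draws exceeding the anchor:  {(later > anchor).sum()}")
print(f"eventual maximum: {later.max():.1f} ({later.max() / anchor:.1f}x the anchor)")
```

In a typical run the anchor is breached around twenty times, and the eventual maximum dwarfs it. The tallest mountain you have seen is a poor bound on the tallest mountain there is.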

Concept 2: the world is more random than we lead ourselves to believe

A second reason why you need redundancy – there might be more reasons, but my aim here is to introduce two new ways of looking at the world offered by Taleb – is the narrative fallacy. Planning is nothing more and nothing less than a narrative – a story – you create around your project, based on your previous experiences with projects. You want this story (the budget and the planning) to unfold according to plan. You take with you all the things – the good and the not so good – that happened in previous projects, create a narrative explaining why they happened, and include that knowledge in your current plan. I wrote about narratives and stories before in my blog entry Successful Businesses and the Halo Effect. It is almost as if you are rereading Rosenzweig's comments on stories in The Halo Effect when Taleb writes:

We like stories, we like to summarize, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in this section (…) is what I call the narrative fallacy. (…) The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.

And:

If narrativity causes us to see past events as more predictable, more expected, and less random than they actually were, then we should be able to make it work for us as therapy against some of the stings of randomness.

Needless to say, Taleb argues that we are ill-prepared for randomness and will always be fooled by our tendency to attach an explanatory narrative to events in the past. That narrative, however, will never prepare you for events in the future.

Redundancy as a buffer against unforeseen events

The combined effects of a random environment, the Lucretius problem and the narrative fallacy create a background against which you can easily underestimate the impact of unforeseen events on your project (or business plan, or even life itself). Build in some redundancy and use the concepts discussed here to rationalize the hunch that tells you 'you never know what will happen'. It will make your project plans more robust and realistic, and it will give you options. A final word from Antifragile:

Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens – usually.
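
As a closing sketch (my own toy model, with made-up numbers), here is what that 'usually' looks like for a project plan: when task durations are skewed, a plan that simply sums the best-guess estimates is late most of the time, while a modest buffer absorbs the bad draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_runs = 20, 100_000

# Skewed task durations: median ~10 days each, occasional long overruns (assumed).
durations = rng.lognormal(mean=np.log(10), sigma=0.5, size=(n_runs, n_tasks))
totals = durations.sum(axis=1)

naive_plan = n_tasks * 10          # sum of the median estimates: 200 days
buffered_plan = naive_plan * 1.3   # 30% redundancy

print(f"P(late), naive plan:      {(totals > naive_plan).mean():.0%}")
print(f"P(late), with 30% buffer: {(totals > buffered_plan).mean():.0%}")
```

In this toy setup the naive plan overruns in roughly 85% of the simulated projects; the buffered plan in roughly 10%. The buffer looks like waste in the runs where nothing unusual happens – except that something unusual happens, usually.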
