In the process of getting better at something, there are two kinds of mistakes that hold you back. The first kind is the mistake of not knowing: not knowing how the market works, which major to choose, what to do.
If I wanted to start a business selling industrial solvents, I would suffer from the first error. I have no idea how the industry works (or much about solvents, for that matter). Ignorance holds me back.
Ignorance, however, isn’t too hard to fix. If I spent several months researching, I could probably have a decent idea of how the industry works. If I spent several years working in it, I’d know even more. The first error has a straightforward remedy—learn more.
The second kind of mistake, and the far more insidious one, is believing things that happen to be wrong. If you’ve convinced yourself that a hill is a valley, it will take a lot of climbing before you realize you were wrong. I worry more about the second mistake.
The Map and the Territory
We spend our lives devising theories for explaining the world. These theories form crude maps of the impossibly complex terrain of our lives. We have a map for our careers, a map for our relationships, a map for our beliefs about the meaning in our lives.
Maps are good. Even a map that is occasionally wrong is a lot better than no map. Philosophical skepticism may have its adherents, but it’s utterly impractical. You must have beliefs about the world to make decisions, and even imperfect ones are better than nothing.
But the map is not the territory. The territory is alien, strange and perhaps even incomprehensibly complex. Any map-making process undertaken by an individual over the course of one lifetime is going to be error-ridden.
The rational thing to do is a cost-benefit analysis: if fixing our map costs fewer resources than a correct map would yield in benefits, fix the map. Yet human beings rarely do the rational thing.
Confirmation Bias and Protecting Our Maps
It turns out we don’t follow this rational process for map fixing. Through a set of interesting experiments, psychologists showed that instead of hunting for information that would force us to change our maps, we seek out information confirming what we already “know”.
One of these experiments was particularly ingenious. Subjects were given a set of three numbers, such as 2, 4, 6, and told that it fit a secret rule. The task was to figure out the rule by suggesting further sets of three numbers, which the experimenter would say either fit or didn’t fit the pattern.
Given only one data point, participants could dream up many possible hypotheses. The numbers could all be even, for example, or the middle number could be the average of the first and last.
The rational method for testing these hypotheses would be to choose counterexamples. If you believed the numbers had to be all even, you’d try 3, 4, 6 and see whether it fit. If it did, you’d know that your all-evens rule was not the correct one.
This wasn’t how subjects proceeded, however. Instead, they picked examples that confirmed their existing hypothesis. All-evens testers would offer 4, 8, 10 or 2, 6, 12 as test cases, seeking validation for their theory.
The problem with this method was that the actual rule was “any ascending numbers,” so the previous two examples would have fit, but so would 1, 3, 12 or 3, 9, 11. Testing by confirmation let subjects pile up validating examples without ever uncovering the secret rule.
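To make the failure concrete, here is a minimal Python sketch of the experiment’s logic; the rule functions and the probe triples are my own illustrations, not taken from the original study.

```python
# Minimal sketch of the number-rule experiment (illustrative, not the study's actual code).

def secret_rule(triple):
    """The experimenter's actual rule: any strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

def all_evens(triple):
    """The subject's hypothesis: all three numbers are even."""
    return all(n % 2 == 0 for n in triple)

# Confirmation-style probes: triples chosen because the hypothesis says they should fit.
confirming_probes = [(4, 8, 10), (2, 6, 12), (8, 20, 56)]

for probe in confirming_probes:
    print(probe, "fits" if secret_rule(probe) else "does not fit")

# Every probe fits, so the subject hears "yes" each time and grows more confident
# in the all-evens hypothesis -- yet none of these answers can tell the two rules apart.
```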
What relevance does this have outside the laboratory? The relevance is that we look for information to support our theories, not to break them. We try to protect our maps instead of probing where they may be flawed. Worse, when we do expend energy trying to improve our maps, the methods we default to are unsound.
Looking Around the Edges
The most profitable way to win the secret-rule game in the experiment is not to pick random counterexamples. After seeing 2, 4, 6 validated, picking 1, 17, 4 and seeing it fail doesn’t teach you much. Instead, the best bet is to try to break the edges of your rule: make one number odd, flip the order, make two the same.
The same strategy is effective in life: test around the edges of your map so you’ll know where to redraw it. By breaking your map in precise ways, you get more information than by seeking confirmation or by pulling counterexamples out of a hat.
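Using the same illustrative rules as the sketch above, a few edge-breaking probes show how quickly the all-evens hypothesis falls apart (again, a sketch under assumed rule names, not the study’s procedure):

```python
# Edge-breaking probes against the same illustrative rules as before.

def secret_rule(triple):
    a, b, c = triple
    return a < b < c                        # "any ascending numbers"

def all_evens(triple):
    return all(n % 2 == 0 for n in triple)  # the subject's hypothesis

edge_probes = {
    "make one odd":      (3, 4, 6),
    "flip the order":    (6, 4, 2),
    "make two the same": (2, 2, 6),
}

for description, probe in edge_probes.items():
    answer = "fits" if secret_rule(probe) else "does not fit"
    predicted = "fits" if all_evens(probe) else "does not fit"
    print(f"{description}: {probe} -> experimenter says it {answer}; hypothesis predicted it {predicted}")

# Each edge probe produces a disagreement between prediction and answer, so a single
# round of breaking the edges exposes the hypothesis as wrong and hints that ordering,
# not evenness, is what the secret rule cares about.
```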
I’ll give an example from my business. When I first launched Learning on Steroids, it was unusually successful compared to my previous business efforts, and I wanted to know which principles drove that success so I could apply them in the future. Here were some candidate hypotheses:
- Monthly billing over one-time sale.
- Conducting an email-based launch.
- Restricting capacity.
- Restricting registration time.
- Having a clearer service component (in earlier editions I placed more emphasis on being able to reach me for feedback).
All of these could have been valid, some combination of them could have been, or none of them might have been the underlying cause of the recent success.
My approach to testing these hypotheses was to vary these variables individually in future launches. Later, I ran launches with one-time courses, with no capacity restrictions, and with the service component downplayed. I couldn’t always test each variable in perfect isolation, but in nearly every launch the permutation of these variables was somewhat different.
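For what it’s worth, here is a hypothetical sketch of how such launch records could be compared one variable at a time; the field names and the outcome metric are assumptions for illustration, not my actual bookkeeping.

```python
# Hypothetical sketch of comparing launches one variable at a time.
# Field names and the outcome metric are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def compare_by_variable(launches, variable, metric="revenue"):
    """Group past launches by whether `variable` was on or off and average the outcome."""
    groups = defaultdict(list)
    for launch in launches:
        groups[launch[variable]].append(launch[metric])
    return {setting: mean(values) for setting, values in groups.items()}

# `launches` would be a list of dicts recorded from real launches, e.g. with keys like
# "monthly_billing", "email_launch", "capacity_cap", "deadline", "service_component",
# plus the outcome metric. Because only one or two variables change between launches,
# differences in the grouped averages give a rough signal of which variables matter --
# not a controlled experiment, but enough to redraw a few lines on the map.
```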
In retrospect, my hypothesis now is that #2, conducting an email-based launch, is the only consistent winner. Restricting capacity has had mixed results, and restricting registration time has a minor but positive effect. The service component wasn’t important, but that could have been an artifact of the price points I tested.
My map is far from perfect now, but it is a lot better than it was when I started, which I believe is a large part of the reason my business generates four times the revenue it did when I formed those initial hypotheses.
Researching Edge Cases
You often don’t need to run an experiment to break your map on edge cases and update it to more accurate beliefs. Sometimes simply doing a bit of research can reveal edge-case counterexamples which force you to re-evaluate your thinking.
Cal Newport recently shared an example from his own journey toward becoming a tenured academic. Instead of browsing through random examples and trying to confirm his previous hypotheses, he looked for a natural experiment: take PhD graduates from the same graduating class who differed greatly in their eventual success, and look at what they did differently in their early careers.
Studying these two groups, he found the biggest differences were the number of papers published (the successful group had more publications) and the number of citations, a rough indicator of quality. Using those as benchmarks, Cal could home in on the precise metrics success required in his field.
Research, as opposed to direct experimentation, is useful when the time frame over which you expect to see results is very long. I could experiment directly on my launch strategy because I could repeat it every 3-6 months. Cal was better off looking for natural experiments because the time frame for observing results was measured in decades.
Comfort in Contradiction
To me, the idea of map-breaking is unsettling and counterintuitive. Our brains aren’t hard-wired to think this way, so it always takes a deliberate effort to apply.
The challenge for me is becoming comfortable with spending a lot of mental energy constructing explanatory theories, and then seeking to tear them down. We’d rather spend time building more than admit what we’ve built may rest on a shaky foundation.
One step I’ve found helpful in combating this urge (though it’s often derided by outsiders) is simply allowing yourself to temporarily hold contradictory beliefs. Believing that your theories are themselves a work in progress lets you recognize the validity of part of the map, even if you don’t yet know how to connect it to the other parts.
Ultimately, confirmation bias is in our nature and can’t be completely avoided. With effort, however, I think we can remind ourselves to guard against it when we design the larger experiments and research projects that redraw the lines on our maps.