“It ain’t what you know that gets you into trouble. It’s what you know for sure that just ain’t so.” – Mark Twain
Most people think about learning as adding knowledge and skills. When you learn French, you learn that the word *avoir* means “to have.” You now have a new fact in your mind that didn’t exist before.
Adding knowledge like this, I’d like to argue, is actually the less important case. The most useful learning isn’t usually a strict addition of new knowledge, but first unlearning something false or unhelpful.
To see why unlearning might matter more than strictly additive learning, consider that, for any area of your life in which you operate regularly, you must acquire facts and knowledge about that area. You need to understand your work, the place you live, the language you speak, the culture you exist in, and so on.
This means that, for the parts of your life that matter for you right now or in the past, you already have quite a bit of knowledge. For new knowledge to come in, and not replace or alter anything you know already, that knowledge either needs to be about a detail in your current life that was too insignificant to merit observation earlier, or it has to be about a domain with which you still have relatively little experience.
That additive learning must pertain either to unfamiliar domains or to relatively unimportant aspects of familiar ones may seem a strange conclusion. Most of what we learn in school is additive learning of this kind. Yet, by this very reasoning, much of it must not be very important for our day-to-day lives.
When new information requires you to unlearn or modify what you currently know, it must touch on something you had already learned. This doesn’t guarantee that it is important. However, since most of our knowledge is acquired for living practically in the world, there’s a better chance that something which must first be unlearned actually matters.
Types of Unlearning
There are different ways you might unlearn something in light of new information. The first is a straightforward refutation of the old idea. If you thought that Abraham Lincoln was the first American president and then read in a book that it was actually George Washington, you might, if you believed the book, completely revise your view.
This complete refutation is atypical. More likely the new knowledge doesn’t contradict the old one, but it may modify it in some way. If I believe my best friend is very trustworthy, but I learn he is cheating on his wife, I may not completely revise my opinion of him, but I may trust him a bit less or trust him less in marital matters.
Other times new knowledge revises a simpler picture by filling it with more complex details. This is similar to adding new knowledge, although because the older, simpler view of the issue has been overwritten with more detail, there is some unlearning going on. When Albert Einstein developed relativity, it overthrew Isaac Newton’s laws of motion. However, this wasn’t a complete refutation but a modification: Newton’s laws still hold approximately when speeds are far below that of light and gravity isn’t extreme.
In all of these cases, however, you have to first let go of something you thought you understood to make way for a new understanding. This isn’t always easy to do.
Difficulties Unlearning
The first challenge of unlearning is that when something contradicts your current understanding, you are likely to dismiss it. This may be adaptive in a world where much of what people say, and much of the information you encounter, is false or constructed to manipulate you. Things you don’t currently believe are, ceteris paribus, more likely to be false. However, this confirmation bias can make it harder to unlearn when doing so would be valuable to you.
A deeper problem, I believe, is that human beings tend not to represent doubt and uncertainty in a deep, fine-grained way. That is, the things you believe now, you tend to believe completely, even if provisionally. Yet whether those beliefs are near-certain or highly doubtful, the way they are represented in the brain is much the same.
It’s true that a more doubtful belief is more likely to be dismissed than a certain one. If I try to argue that the moon is made of cheese, for instance, I’ll be met with a lot more resistance than if I try to argue against something you only believe loosely. However, this revision occurs in an active sense: when you are directly assessing the reasons for the belief in question. I believe that when a belief isn’t being actively considered, it can still inform your thinking in other ways, and that in those cases the belief’s relative certainty isn’t factored in.
If this view is true, then many of the things we have learned are dangerous not because they are immune to counterargument, but because they can subtly influence our thinking in adjacent areas when we aren’t vigilant about how likely they are to be true.
If this sounds confusing, consider the example I mentioned earlier: a best friend you discovered was cheating on his spouse. Suppose, however, that you didn’t learn it firsthand, but through a rumor from a third party. You don’t dismiss the charge outright, but you tentatively accept that there’s some probability your friend is being unfaithful. If forced to confront this belief directly, through debate or reasoning, you might conclude that there’s only a minor chance he is cheating. But consider instead if someone asked you, in an unrelated context, whether your friend ever lies to get what he wants in business. It’s my opinion that this latent, provisional belief that “X cheats on his spouse” may implicitly inform your intuitions about his trustworthiness, even though the belief itself may not be very reliable.
The intuition I want to present is that beliefs, in their capacity to inform our thinking, tend to be treated as black-and-white, either fully believed or completely dismissed, rather than the more accurate picture in which many beliefs have a middling likelihood of being true. While we can hold more nuanced views when a belief is being debated directly, the dangerous case is when it is used to draw inferences about other topics while its doubtful status is simply ignored.
The main challenge of unlearning, therefore, is that most of our false or doubtful assumptions about the areas that impact our lives are never examined. We use these assumptions to operate, but because they aren’t actively reflected upon, studied or challenged, they maintain their full force, even if fairly simple arguments could overturn them.
Learning as Stamp Collecting Versus Diving into Strangeness
I see two main views of learning. The first is like stamp collecting. The person wants to collect more and more knowledge, mostly for the purposes of showing it off to people they want to impress. The knowledge here is largely inert and unimportant for their lives—it’s just a collecting hobby accruing more facts and ideas.
There’s nothing wrong with stamp collecting. Knowing facts and ideas, even if they aren’t particularly useful or central to our lives, isn’t a bad thing. It’s probably a superior hobby to many other pursuits, since knowledge can, at least some of the time, spill over into more practical consequences.
The other view of learning, however, is centered around unlearning. This is the view that what we think we know about the world is a veneer of sense-making atop a much deeper strangeness. The things we think we know, we often don’t. The ideas, philosophies and truths that guide our lives may be convenient approximations, but often the more accurate picture is a lot stranger and more interesting.
Stamp collecting is more popular than diving into strangeness. For one, it is strictly additive. Every new trivia fact, book of the month and water-cooler topic gets added to your collection, which you can whip out in conversations to impress people who want to talk about them.
Diving into strangeness, in contrast, involves a cyclical process of first undermining the things you thought you had learned. Facts, ideas and theories are no longer a comforting collection, but a temporary foothold you leave behind as you try to get to something deeper.
What is Strange?
Almost everything is much, much weirder than it looks at first. Science is the clearest example of this. Subatomic particles aren’t billiard balls, but strange, complex-valued wavefunctions. Bodies aren’t vital fluids and animating impulses, but trillions of cells, each more complex than any machine humans have invented. Minds aren’t unified loci of consciousness, but the process of countless synapses firing in incredible patterns.
Science confirms the underlying weirdness, but for most people, knowing science is another kind of stamp collecting. Knowing quantum strangeness doesn’t overlap with most areas of practical life, so it can be an additional fact or idea one knows and can bring out in conversations.
More interesting, for me at least, are all the skills and knowledge that we depend on and use everyday that have hidden weirdness beneath them. When you remember something, did it actually happen that way? When you give a reason for your behavior, did reasoning have anything to do with it? When you think that achieving something will make you happy, will it?
Just as science has incredible depths of strangeness underneath, everyday life also floats calmly upon a deeper weirdness that first requires unlearning in order to appreciate.
Unlearning and Local Maxima
Unlearning is unpleasant for most people. Finding out something you thought you knew was false, or a misleading simplification, feels bad. Since strangeness tends to predominate, and we manage to get by in our lives without worrying about it most of the time, why bother? Why not just collect stamps and leave the bedrock of our intuitions comfortably untouched?
For most people, this aversion to unlearning may not be so bad. Skillful action exceeds skillful knowledge, so most of us manage to get by okay even if our articulated theories of the world are out of sync with a deeper reality.
The main advantage I see in trying to get a deeper picture is that it helps you climb out of local maxima. Theories can, to the extent they are accurate, shine a light on things we could do, change or experience that are outside what we’ve experienced directly before. Theories help us make predictions about whether those unseen places are good places to be or not.
A powerful algorithm for machine learning is gradient descent. It has a complex mathematical formulation involving vector calculus and partial derivatives, but the intuitive picture of what it is doing is quite simple to understand. Imagine yourself standing at the edge of a valley. Your goal is to get to the lowest possible spot you can. However, the terrain is quite complex, and you aren’t sure exactly what it looks like. What should you do?
The gradient descent algorithm is simple: go downhill. If you always walk in the direction of steepest decline, you’ll eventually reach a spot where every direction goes uphill again. This must be a low spot in the terrain.
The problem with gradient descent is that you can get stuck in little pockets where, to go further downhill, you must first go uphill for a while.
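To make this concrete, here’s a minimal sketch of the idea in Python. The toy terrain `f`, its slope `grad_f` and the starting points are my own illustrative choices, not anything from a real machine-learning system:

```python
def f(x):
    # A toy "terrain" with two valleys: a shallow one near x ≈ 1.1
    # and a deeper one near x ≈ -1.3, separated by a small hill.
    return x**4 - 3 * x**2 + x

def grad_f(x):
    # The slope of the terrain at x (the derivative of f).
    return 4 * x**3 - 6 * x + 1

def descend(x, step_size=0.01, steps=1000):
    # Gradient descent: repeatedly take a small step downhill.
    for _ in range(steps):
        x -= step_size * grad_f(x)
    return x

print(descend(2.0))   # starts right of the hill, settles in the shallow valley (x ≈ 1.1)
print(descend(-2.0))  # starts left of the hill, settles in the deeper valley (x ≈ -1.3)
```

The walker that starts on the right never finds the deeper valley, because reaching it would require climbing uphill first.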
This is a computer analogy, but I believe the way humans acquire many practical skills through experience works similarly. We are pushed and pulled by our intuitions toward a local maximum of “goodness” in how our lives could be. Although we aren’t always at this equilibrium, if our lives are relatively stable, we tend to return to it.
The problem with our lives is the same as with computers, however. Many people get “stuck” at local maxima. The person who is addicted to alcohol is at a local maximum. Drinking less causes pain; to make things better, they first have to feel worse.
Procrastination is a local maximum. Starting work first involves pushing through an unpleasant feeling about the task at hand. However, as anyone who procrastinates often knows, the state of procrastination isn’t particularly good in an absolute sense. It feels awful; it’s just that any immediate action you anticipate makes you feel a little worse than that, so you stay stuck.
What’s the connection between unlearning and local maxima? Well, one way you can get out of a local maximum is if you have some notion of what the terrain is shaped like. If you know, for a fact, that you are sitting in a locally optimal, but globally awful, position, you can push against your intuitions and accept transitional badness in hopes of longer-term goodness.
Knowing what the terrain is shaped like, however, depends on having an accurate picture of the very facts and knowledge that are closest and most fundamental to your life right now. If those facts are wrong, your ability to make guesses about what places further from your immediate vicinity are actually like diminishes rapidly. Depending on how large the local maximum surrounding you is, it may not be possible to see a better future when one does exist, or there may appear to be one that is actually a mirage.
In many ways, unlearning has the same structure as the local maxima problem for your overall life situation. To get a more accurate picture, you have to first sacrifice some certainty in the things you take for granted. This sacrifice involves going against your natural local-optimization inclinations.
Strangeness, Randomness and Unlearning
So far, I’ve spoken about one method for overcoming the local maxima problem: having a better theory of what unvisited places in the vast space of possible life experiences might be like. This helps you spot genuine opportunities for improvement and avoid chasing mirages, the hope-inspiring but ultimately illusory directions to follow.
Unlearning fits into this because, unlike with the stamp collecting of purely additive learning, we all already have theories of what the terrain of nearby life spaces is like.
Another method, however, for getting out of local maxima is simply randomness. Programmers often use some amount of random motion in their gradient descent algorithms. This randomness means that their solutions don’t snag on relatively insignificant dips.
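As a rough sketch of that idea, and only a sketch, here is the same toy terrain as before, this time alternating plain downhill walks with occasional random jumps. This is a simple, hand-rolled variant of the perturbation tricks real optimizers use; the jump size and number of rounds are arbitrary choices of mine:

```python
import random

def f(x):
    # The same toy terrain: a shallow valley near x ≈ 1.1, a deeper one near x ≈ -1.3.
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def descend(x, step_size=0.01, steps=1000):
    for _ in range(steps):
        x -= step_size * grad_f(x)
    return x

def randomized_descent(x, rounds=20, jump=1.5):
    # Alternate downhill walks with random jumps, keeping a jump
    # only if it ends up somewhere lower than the best spot so far.
    best = descend(x)
    for _ in range(rounds):
        candidate = descend(best + random.uniform(-jump, jump))
        if f(candidate) < f(best):
            best = candidate
    return best

print(descend(2.0))             # stuck in the shallow valley (x ≈ 1.1)
print(randomized_descent(2.0))  # usually hops the hill and finds the deeper valley (x ≈ -1.3)
```

The random jumps occasionally land on the far side of the small hill, and the downhill walk from there reaches the deeper valley that plain descent never finds.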
Human beings can use randomness too to avoid the same problem. Exposing yourself to a larger variety of experiences can pull you out of temporary snags. The main disadvantage of this approach is that randomness can sometimes be destructive. Trying heroin, cheating on your spouse or joining a cult may all offer unique experiences, but their dangers may not be worth the payoff.
Unlearning, to me, offers a relatively safer way of exploring larger swaths of the terrain of life possibilities. It may create mental discomfort and instability, as you contend with the fact that many of the things you took for granted may not be true. However, this is often a lot less dangerous than the effects undirected randomness can have on your life.
How to Unlearn Things
How do you go about unlearning the things you think you know? This isn’t a trivial task. Simply throwing your hands up and admitting you know nothing may be a Zen kind of solution, but it doesn’t really offer a way forward to true knowledge. It simply admits ignorance of any theory for explaining the terrain, rather than trying to come up with more useful ones.
One way to begin unlearning is to seek additive knowledge in familiar areas and then use that new knowledge to start pulling up and modifying old knowledge. For me, learning about psychology and cognitive science often had this effect: I would start with a particular belief about myself that seemed reasonable, and then, digging deeper, I would encounter careful arguments showing why that belief was probably false. From that point of tension, I could start reworking some of my old beliefs.
This approach can work, but it’s difficult, and it requires more patience for theory and academic learning than most people have an appetite for. Another approach is to seek out other people’s experiences of the world. Other people may not give you *the* theory for understanding the world, but the more their experiences differ from yours, the more likely they occupy a different position in the space of life possibilities, and the ways their lives differ from your expectations can themselves give you information about your own thinking.
Travel, in this way, can be a potent form of unlearning. For me, the best travel experiences of my life haven’t been going to places that exceeded my expectations, but going to ones that deeply undermined them. I’ve written about how going to China forced me to radically rethink that place. But talking to people in different places has also shown me how arbitrary many of my own culturally specific views of things are.
This kind of travel means actually talking to people. Learning languages helps because you’re more likely to encounter people who differ from you more dramatically. The normal process of sightseeing and taking Instagram-worthy photos of famous landmarks is fine, but it’s stamp collecting, not acquiring model-altering insights.
A third approach to unlearning is to be more varied and bold in your experiments in life. Pure randomness can have a destructive quality to it. However, if you avoid obvious risks, many directions in life can be explored more thoroughly than most people do.
I think the main drawback of this third approach is that it depends on a kind of self-confidence, which itself tends to depend on having had positive experiences venturing outside your safe little local maximum in the past. Without confidence, people have an instinctive aversion to exploring, and so this approach to getting out of life’s local maxima has a feedback component to it. The more successful your unlearning and exploration of life’s possibility space, the more likely you are to take larger leaps on theory rather than direct experience alone.
Being Comfortable with Mystery
A good meta-belief to this whole unlearning endeavor is to be comfortable with the idea that everything you know is provisional, and that underneath what you know is likely a more complex and stranger picture.
Human beings seem to be naturally afraid of this groundless view of things. I’m not quite sure why that is. It may be that this kind of epistemic flexibility leads one to question societal norms and rules of conduct, and so people who think too much about things are suspected of having an amoral character. That’s certainly the perspective of many traditional religious viewpoints, which discourage open-ended inquiry in favor of professing allegiance to dogma.
However, there’s probably a more basic aversion to groundlessness, rooted in a feeling that uncertainty is bad and certainty is good. Like most aversions, though, I think this is something you can condition yourself to be comfortable with through exposure.
I used to be very afraid of heights. When I was a child, I had a hard time even going near the window if I was in a tall building. Sometime around my late teens, however, I started pushing myself to be exposed to more heights. First roller coasters, then ziplining and paragliding. Last year, I went skydiving for the first time and, although it was scary, I felt a lot less anxious than I used to feel with much less extreme exposures to heights.
Psychologists have known for some time that progressive exposure can remove many conditioned fears and aversions. Sometimes, if the exposure gets paired with a reward, something initially aversive can eventually become desirable, as spicy-food eaters and adrenaline junkies can attest.
Similarly, I think exposure to the unknown, to unlearning comfortable old beliefs, to the deeper mystery of things for which our current knowledge is only a temporary foothold, can switch from something we shy away from into something we enjoy. The thrill of finding a new, more accurate way of looking at things starts to eclipse the aversion to uprooting a previously stable way of thinking.