I’m finishing up a longer post incorporating some of the many useful and interesting suggestions readers shared about how they use ChatGPT to learn languages, programming, mathematics and more. However, before I share that post, I’d like to address a different question.
One question a few readers have is whether ChatGPT will eliminate the need to learn things. After all, if you can easily ask the machine a question and it will spit out an expert-like answer, what’s the point of getting students to write essays, memorize facts or try to remember a lot of knowledge?
Like everyone, I’m gobsmacked by the performance of large language models like ChatGPT and other forms of generative AI. That said, it’s hard to predict things, especially about the future. And I suspect the evolution of new AI tools is harder to predict than most things. We have poor intuitions about what sorts of problems machines can solve, and so the general discourse around AI as a field tends to swing between wild pessimism and optimism as we learn more about the path ahead.
One thing I am reasonably confident predicting, however, is that LLMs won’t make learning obsolete.
Why Learning Still Matters
One reason I can feel confident about the continued value of learning is that while ChatGPT is novel, the kind of technological change it has brought about is not.
Humans have been inventing technology that has replaced aspects of cognitive work for thousands of years. In every case, the result of that substitution was not a diminishing of the need for skills and knowledge—but an increase.
Consider the impact of writing. Human knowledge used to be transmitted orally. When writing became available, thinkers such as Socrates decried the damage it would do to people's memory, and it's true that we no longer need to memorize speeches when the full text can be written down. Yet writing didn't shrink the demand for knowledge; by recording far more of it than any memory could hold, it expanded what people were expected to learn.
Or consider calculators, which have been widespread for decades. You probably have one on your smartphone. While they certainly eliminate the need to learn some skills (I have no idea how to use a slide rule, for instance), we still have to understand arithmetic to use a calculator to get the answer we seek, even if the machine does the actual computation.
A robust finding in educational psychology is the importance of background knowledge. People who know more about a subject can understand and retain more of what they read. Knowledge stored outside your head doesn't provide the same benefit, because it can't participate in the thinking you do while learning and reasoning.
Similarly, even if you use a calculator all the time, you understand math better when you first learn to do it by hand. Working through low-level calculations creates conceptual and procedural skills that build an intuitive sense for the underlying algorithms. While you can save effort by using a calculator, the intuition you gained from your initial learning helps you recognize if the calculation is correct and how to fix things when they go wrong.
The ability to look something up is not a substitute for having knowledge in your head.
Will ChatGPT Change What Knowledge Matters?
So perhaps we can safely reject the extreme position that learning will become obsolete. But won't LLMs change what kind of knowledge is worth learning?
This is probably the case. But it’s not obvious to me what the transition will be, and I don’t think most people’s intuitions on this are correct either.
The gut reaction seems to be that because large language models can easily spit out an explanation for any question answerable within the sphere of well-documented human knowledge, they devalue that kind of book learning.
Even putting aside the hallucinations and failings of current LLMs and assuming engineers perfect the technology to the point where it rarely gets these questions wrong, I’m not sure this intuition is correct.
As mentioned in the research on background knowledge, your ability to ask a useful question and then understand an LLM's answer depends heavily on the knowledge you already possess. For instance, economist and polymath Tyler Cowen argues that asking ChatGPT an economics question goes better if you ask it to answer in the style of Milton Friedman or Paul Krugman. Your own economics knowledge allows you to extract more accurate answers from the LLM.
This isn’t a new phenomenon. If the answers to all the world’s questions were already on the Internet, but you didn’t know where to look, they’d be invisible to you.
Contrary to the supposition that book learning will be less relevant, I suspect that it will be more important. The people who get the most programming help from ChatGPT are existing programmers, since their knowledge allows them to ask well-formulated programming questions and accurately assess the results. LLMs may even increase the demand for knowledge, since the combination of [knowledgeable person + ChatGPT] will outperform [ignorant person + ChatGPT] by a wider margin than exists between the two people on their own.
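To make that concrete, here's a small, hypothetical illustration (not taken from any actual ChatGPT transcript): the kind of plausible-looking Python an LLM might hand you, where spotting the problem depends on already knowing the language's pitfalls.

```python
# Hypothetical example: plausible LLM-style code with a subtle flaw that an
# experienced programmer would catch on review.

def add_tag(item, tags=[]):
    """Append an item to a tag list (buggy: mutable default argument).

    The default list is created once and shared across every call that
    omits `tags`, so tags silently accumulate between calls.
    """
    tags.append(item)
    return tags


def add_tag_fixed(item, tags=None):
    """The fix a knowledgeable reviewer would ask for."""
    if tags is None:
        tags = []
    tags.append(item)
    return tags


if __name__ == "__main__":
    print(add_tag("draft"), add_tag("urgent"))              # both show ['draft', 'urgent']
    print(add_tag_fixed("draft"), add_tag_fixed("urgent"))  # ['draft'] ['urgent']
```

Nothing here is unique to ChatGPT. The point is simply that output like this only looks trustworthy to someone who can evaluate it, and that judgment is exactly what background knowledge supplies.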
What Subjects and Skills Are Still Worth Learning?
A few people wrote to me wondering whether it even makes sense to learn second languages, programming, creative writing or art, given the ever-increasing powers of these applications.
My feeling is that, on the whole, the value of these skills hasn’t changed much. What will probably change is the differential value of mastery versus novice levels of various skills. There may be fewer professional opportunities available for mediocre programmers, writers, artists or translators. This kind of skill polarization has been going on for a long time, as I documented in my book Ultralearning. ChatGPT is just a continuation of current trends.
But just as there’s a diminished value for someone who can crank out stock art or kludge together a website, there’s probably increased value for those sophisticated enough to use these tools in collaboration with their professional expertise.
What if we get genuinely general artificial intelligence? An application so advanced it can perform any cognitive task better than a person, or indeed, better than a person in collaboration with a machine? Chess offers an illustrative example: the best engines now pick better moves than the best human players, even when those players have an engine of their own to consult.
In the extreme case, all bets are off. Perhaps we'll live in a utopian future where machines do all the work and people pursue cognitive activities for leisure or showing off, much as gardening today is a hobby rather than agriculture. Or perhaps we'll all get turned into paperclips. Who knows?
But, despite the uncertain long-term forecast, I’m fairly confident that the short-to-medium-term impact of tools like ChatGPT will not be to devalue learning complex skills and knowledge, but will actually increase the value of these activities. As with other technological advances that have come before, it will be the quick and enthusiastic learners who reap the benefits.