Viking law, backgammon strategy, how consciousness works, and the history of alchemy. Here’s some of the summer reading I’ve been up to lately!
1. Consciousness and the Brain by Stanislas Dehaene
Dehaene is a leading neuroscientist working to understand the brain events that correspond to our conscious access to information.
This isn’t as mysterious as it sounds. If you flash an image to a person quickly enough, they can’t explicitly recall seeing it. Yet those subliminal presentations still enable faster processing of the unseen stimulus when it is presented again. This effect, known as priming, is just one of several techniques neuroscientists can use to present the exact same stimulus while letting subjects’ conscious awareness vary. By observing the difference between “aware” and “unaware” trials, researchers can pin down what happens in the brain when we notice things.
Consciousness results from a cascade. Below-threshold stimuli get partly processed, but their effects are transient and quickly recede into the background. Conscious states, in contrast, get amplified and communicated across diverse areas of the brain, like water bursting from a dam.
2. Mental Models by Philip Johnson-Laird
How do people reason logically? This question has puzzled thinkers for centuries. One proposed answer is that the rules of logic are built into the brain.
Yet, if that were so, why are we so bad at formal deduction? Given that syllogistic reasoning has to be explicitly taught, the idea that there’s a built-in logical “grammar” seems unlikely.
Still, how is it that we do generally reason correctly about situations? Clearly, we’re not totally illogical, even if few of us are Vulcanesque in our reasoning abilities.
Johnson-Laird argues that we reason by setting up models of the questions we’re trying to answer. For example, rather than reasoning on logical sentences themselves, “All humans are mortal, Socrates is human, ergo Socrates is mortal,” we create a mental representation that has some group of people, all of whom are mortal. One of these people is Socrates, and we “see,” by inspection of this model, that Socrates is also mortal.
When we struggle to think logically, Johnson-Laird argues that it is usually because the situation described corresponds to multiple possible models. When this happens, we have to systematically generate and inspect each model. While doable, this is more difficult, and we’re more likely to miss an edge case.
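To make the idea concrete, here’s a toy Python sketch (my own illustration, not Johnson-Laird’s): instead of manipulating logical formulas, we build a small concrete model of the premises and “read off” the conclusion by inspecting it.

```python
# Toy illustration of model-based reasoning (all names are mine):
# build a concrete model of the premises, then inspect it.

def build_model():
    """A small world containing some humans, all marked mortal."""
    humans = {"Socrates", "Plato", "Aristotle"}
    mortal = set(humans)  # premise: all humans are mortal
    return humans, mortal

humans, mortal = build_model()

# Premise: Socrates is human.
assert "Socrates" in humans

# The conclusion is "seen" by inspecting the model,
# not derived by applying an inference rule to sentences.
print("Socrates" in mortal)  # prints True: Socrates is mortal
```

The interesting cases, on Johnson-Laird’s account, are the ones where the premises are consistent with several distinct models, so this single inspection step must be repeated for each one.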
3. The Algebraic Mind by Gary Marcus
Gary Marcus thinks current machine learning algorithms will not lead to human-like intelligence. Coincidentally, I read this book shortly before Marcus and Scott Alexander engaged in a spirited discussion on recent advances in AI.
As I understand it, Marcus’s point is that we know from cognitive psychology that humans can think in terms of abstract rules. For instance, English-speaking toddlers quickly learn that you can add “-s” to most nouns to make them plural or “-ed” to most verbs to form the past tense. This ability readily generalizes beyond the examples the child has seen and requires far less input than huge language models need.
Neural networks often struggle with abstract rules like this and instead work like a giant lookup table. They can retrieve the “right” answer for given inputs but struggle to extrapolate rules based on past experience. Marcus argues that developing this ability for abstraction in networks will be essential to creating human-like cognition.
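Here’s a toy Python sketch of the contrast (my own illustration, not Marcus’s): a lookup table is perfect on the examples it has stored but helpless on anything new, while an abstract rule, an operation over a variable, extrapolates to words never encountered before.

```python
# Contrast: memorized lookup vs. abstract rule for English plurals.
# (Deliberately simplified; real networks are not literal dicts.)

# "Lookup table": retrieves stored answers, returns None otherwise.
seen = {"cat": "cats", "dog": "dogs", "book": "books"}

def plural_lookup(noun):
    return seen.get(noun)

# "Abstract rule": an operation over a variable, so it applies
# to any noun, including novel ones like the classic "wug".
def plural_rule(noun):
    return noun + "s"

print(plural_lookup("wug"))  # None: no stored entry to retrieve
print(plural_rule("wug"))    # "wugs": the rule extrapolates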
4. The Secrets of Alchemy by Lawrence Principe
Reading a book on a failed science may seem like a waste of time. Weren’t alchemists just a bunch of mystics, cranks and crooks? Principe argues persuasively that we ought to take the alchemists more seriously.
Robert Boyle, the father of chemistry, was an alchemist. So was astronomer Tycho Brahe. Isaac Newton spent more time on alchemy than he did on physics. The line between alchemy and chemistry was blurry in the early days of the Scientific Revolution.
I found the discussion of how alchemists communicated, through riddles and allegories, to be fascinating. Compared with our modern, scientific norms of transparent communication and replicable experiments, alchemy seems almost tailor-made to allow misinformation to propagate.
5. The Idea Factory by Jon Gertner
Bell Telephone Laboratories was perhaps the most inventive place to have ever existed. Transistors, cell phones, lasers, fiber optics—and even the theory of information itself—were developed there.
What made the Labs so successful? AT&T was a regulated monopoly. Thus, it faced limited corporate competition, was flush with both cash and constant technical problems, and needed to maintain a do-gooder image to avoid the ire of anti-trust authorities. These factors enabled Bell Labs to employ enormous numbers of engineers and scientists, allow them to work on basic science rather than projects immediately tied to quarterly profits, and let the early innovations developed there diffuse widely.
I haven’t seen any comparative studies, but the Bell Labs model seems fundamentally different from the university or government-sponsored models or the industrial labs in the Silicon Valley ecosystem it helped launch.
6. The Math Myth by Andrew Hacker
How important is learning math? Particularly the higher mathematics used by engineers, mathematicians and scientists? Should algebra and calculus be prerequisites for students in fields where they will likely never apply them?
Hacker voices skepticism that universal STEM mastery is what’s needed to educate tomorrow’s workforce. Most people don’t use higher math in their jobs. Yet decisions about what math students must learn are generally handed down by a small coterie of mathematical elites.
I am sympathetic to Hacker’s view. I believe that knowledge and skills need to be used to be useful. Thus, the idea that people who will never calculate an integral in their professional life must receive high grades in a calculus class to get a degree and then a job seems perverse to me.
In an ideal world, researchers would perform a detailed cognitive task analysis to identify the knowledge and intellectual skills used in a wide variety of professions, avocations, and civic responsibilities. We could then see which skills are the most generally useful and focus on teaching those first, saving the more niche subjects for those who need them for their future specialty or find them interesting.
Instead, it appears to me that broad curricular decisions are made informally. Sometimes this means highly transferable skills like reading and writing are prioritized. Or foundational knowledge, like what a gene is or how gravity works, is imparted. Yet, equally often, it seems like curricula are picked based on what the highest-status people know and love. Math is no exception in this regard.
7. The Creative Vision by Mihaly Csikszentmihalyi and Jacob Getzels
Csikszentmihalyi and Getzels did a longitudinal study of art students. They followed students after graduation and tracked their success throughout their careers. They found that “problem finding,” the effort spent seeking original problems to which to apply one’s craft, was correlated with later becoming a successful artist.
My favorite tidbit from this book was that when they asked non-artistic professionals to rate art, the ratings were relatively consistent. Yet, when they asked art experts to rate art, the ratings were all over the place. This seems inconsistent with our usual model of expertise, whereby experience increases experts’ ability to identify higher quality consistently.
8. Complex Problem Solving edited by Robert Sternberg and Peter Frensch
My favorite chapter from this book was by Mary Bryson on problem solving in writing. She observes that, typically, problem solving becomes routine as a person gains experience. Doctors, programmers, and car mechanics all shift from deliberate problem solving to fluent, automatic performance as they gain practice.
In contrast, writers experience the opposite trajectory. Grade-school writers produce text with remarkable fluency, stringing together sentences even though their work is often bad. Great writers, in comparison, experience notorious bouts of writer’s block as they struggle to produce prose. What’s going on here?
Bryson suggests that the issue is that a writing task can be conceived of as very different problems. Third graders see the problem of writing as “knowledge telling”: simply writing everything you know about a topic until that knowledge is exhausted. Given the same writing prompt, better writers view the problem as one of persuasion, organization, and teaching—all of which are much harder problems to grapple with than simple telling.
I wholeheartedly agree with this assessment. For me, writing today is much, much harder than it was when I started. Yet my early writing was really bad. I think the issue is simply that, as I’ve read and written more, I’ve become much more sensitive to good writing and increasingly aware of how my work often fails to live up to that standard.
9. Seven Games by Oliver Roeder
Checkers, chess, Go, poker, backgammon, Scrabble, and contract bridge: the history of seven games, told through the lens of efforts to understand their computational structure.
The book covers the games in order of complexity. Checkers, with only a single type of piece and a simple set of moves, was “solved” in 2007, and a perfect strategy is now known (perfect play by both sides ends in a draw). This is not so for the other games, each of which introduces new complexity: hidden information, randomness, time dependency and cooperation.
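To illustrate what “solving” a game means, here’s a short Python sketch (my own example, not from the book) that exhaustively labels every position of a much simpler game, Nim with a single pile, as a win or loss for the player to move:

```python
# "Solving" a tiny game by exhaustive search: single-pile Nim.
# Players alternate taking 1-3 stones; whoever takes the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning if at least one move reaches a losing position.
    return any(not is_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Once every position is labeled, perfect strategy falls out for free:
# always move to a position that is a loss for your opponent.
print(is_win(20))  # False: 20 is a multiple of 4, so the mover loses
print(is_win(21))  # True: take 1 stone, leaving your opponent at 20
```

Checkers was solved the same way in spirit, by labeling positions win/loss/draw, but required massive distributed computation over roughly 5 × 10^20 positions; the other six games remain far beyond exhaustive search.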
I enjoyed this book, even though many of the topics were familiar to me. A good read if you’re interested in how games work.
10. Legal Systems Very Different From Ours by David Friedman, Peter Leeson, and David Skarbek
How did early Icelanders maintain society without rulers or police? What keeps Amish society stable, yet separate from mainstream American culture? How did pirates enforce order among outlaws? In this fascinating survey, the authors explore what the law looks like when you don’t have police, courts, or even a legal code to uphold.
Friedman and colleagues do a good job presenting alien-seeming legal traditions as rational solutions to coordination problems in different societies. Far from seeing practices like blood money or feuds as barbarism, they argue these practices represent ingenious solutions that work reasonably well in practice.
Feud systems, embodied by the dictum “an eye for an eye,” seem like they would devolve into endless cycles of revenge. But in practice, Friedman argues, they usually avoided further violence by ensuring fair compensation and thus resolution of conflict. If I started a fight with your brother and took out his eye, I had to pay you an amount large enough that you’d prefer the money to taking *my* eye. Enforcement fell to clans and close family, who could monitor their members closely, and ritualized compensation kept violence from escalating.