primitive neural circuit that was already in place for tool use in the brains of our early hominin ancestors.
Let’s take this a step further. Even the simplest type of opportunistic tool use, such as using a stone to crack open a coconut, involves an action—in this case, cracking (the verb)—performed by the right hand of the tool user (the subject) on the object held passively by the left hand (the object). If this basic sequence were already embedded in the neural circuitry for manual actions, it’s easy to see how it might have set the stage for the subject-verb-object sequence that is an important aspect of natural language.
In the next stage of hominin evolution, two amazing new abilities emerged that were destined to transform the course of human evolution. First was the ability to find, shape, and store a tool for future use, leading to our sense of planning and anticipation. Second—and especially important for subsequent language origin—was use of the subassembly technique in tool manufacture. Taking an axe head and hafting (tying) it to a long wooden handle to create a composite tool is one example. Another is hafting a small knife at an angle to a small pole and then tying this assembly to another pole to lengthen it so that fruits can be reached and yanked off trees. The wielding of a composite structure bears a tantalizing resemblance to the embedding of, say, a noun phrase within a longer sentence. I suggest that this isn’t just a superficial analogy. It’s entirely possible that the brain mechanism that implemented the hierarchical subassembly strategy in tool use became coopted for a totally novel function, the syntactic tree structure.
But if the tool-use subassembly mechanism were borrowed for aspects of syntax, then wouldn’t the tool-use skills deteriorate correspondingly as syntax evolved, given limited neural space in the brain? Not necessarily. A frequent occurrence in evolution is the duplication of preexisting body parts brought about by actual gene duplication. Just think of multisegmented worms, whose bodies are composed of repeating, semi-independent body sections, a bit like a chain of railroad cars. When such duplicated structures are harmless and not metabolically costly, they can endure many generations. And they can, under the right circumstances, provide the perfect opportunity for that duplicate structure to become specialized for a different function. This sort of thing has happened repeatedly in the evolution of the rest of the body, but its role in the evolution of brain mechanisms is not widely appreciated by psychologists. I suggest that an area very close to what we now call Broca’s area originally evolved in tandem with the inferior parietal lobule, or IPL (especially its supramarginal portion), for the multimodal and hierarchical subassembly routines of tool use. There was a subsequent duplication of this ancestral area, and one of the two new subareas became further specialized for syntactic structure that is divorced from actual manipulation of physical objects in the world—in other words, it became Broca’s area. Add to this cocktail the influence of semantics, imported from Wernicke’s area, and aspects of abstraction from the angular gyrus, and you have a potent mix ready for the explosive development of full-fledged language. Not coincidentally, perhaps, these are the very areas in which mirror neurons abound.
Bear in mind that my argument thus far focuses on evolution and exaptation. Another question remains. Are the concepts of subassembly tool use, hierarchical tree structure of syntax (including recursion), and conceptual recursion mediated by separate modules in the brains of modern humans? How autonomous, really, are these modules in our brains? Would a patient with apraxia (the inability to mime the use of tools) caused by damage to the supramarginal gyrus also have problems with subassembly in tool use? We know that patients with Wernicke’s aphasia produce syntactically normal gibberish—the basis for suggesting that, at least in modern brains, syntax doesn’t depend on the recursiveness of semantics or indeed on the high-level embedding of concepts within concepts.3
But how syntactically normal is their gibberish? Does their speech—mediated entirely by Broca’s area on autopilot—really have the kinds of syntactic tree structure and recursion that characterize normal speech? If not, are we really justified in calling Broca’s area a “syntax box”? Can a Broca’s aphasic do algebra, given that algebra also requires recursion to some extent? In other words, does algebra piggyback on preexisting neural circuits that evolved for natural syntax? Earlier in this chapter I gave the example of a single patient with Broca’s aphasia who could do algebra, but there are precious few studies on these topics, each of which could generate a PhD thesis.
SO FAR I have taken you on an evolutionary journey that culminated in the emergence of two key human abilities: language and abstraction. But there is another feature of human uniqueness that has puzzled philosophers for centuries, namely, the link between language and sequential thinking, or reasoning in logical steps. Can we think without silent internal speech? We have already discussed language, but we need to be clear about what is meant by thinking before we try grappling with this question. Thinking involves, among other things, the ability to engage in open-ended symbol manipulation in your brain following certain rules. How closely are these rules related to those of syntax? The key phrase here is “open-ended.”
To understand this, think of a spider spinning a web and ask yourself, Does the spider have knowledge about Hooke’s law regarding the tension of stretched strings? The spider must “know” about this in some sense, otherwise the web would fall apart. Would it be more accurate to say that the spider’s brain has tacit, rather than explicit, knowledge of Hooke’s law? Although the spider behaves as though it knows this law—the very existence of the web attests to this—the spider’s brain (yes, it has one) has no explicit representation of it. It cannot use the law for any purpose other than weaving webs and, in fact, it can only weave webs according to a fixed motor sequence. This isn’t true of a human engineer who consciously deploys Hooke’s law, which she learned and understood from physics textbooks. The human’s deployment of the law is open-ended and flexible, available for an infinite number of applications. Unlike the spider, she has an explicit representation of it in her mind—what we call understanding. Most of the knowledge of the world that we have falls in between these two extremes: the mindless knowledge of a spider and the abstract knowledge of the physicist.
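For the curious, the law itself is simple to state explicitly—which is precisely what the spider cannot do. In its standard textbook form (the symbols here are the conventional ones, not drawn from this chapter), the restoring force of a stretched string is proportional to how far it has been stretched:

```latex
F = -kx
```

where $x$ is the extension, $k$ is the stiffness of the string, and the minus sign records that the force pulls back toward the unstretched state. The engineer can apply this one line to bridges, springs, or webs alike; the spider’s “version” is locked inside a single motor routine.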
What do we mean by “knowledge” or “understanding”? And how do billions of neurons achieve them? These are complete mysteries. Admittedly, cognitive neuroscientists are still very vague about the exact meaning of words like “understand,” “think,” and indeed the word “meaning” itself. But it is the business of science to find answers step by step through speculation and experiment. Can we approach some of these mysteries experimentally? For instance, what about the link between language and thinking? How might you experimentally explore the elusive interface between language and thought?
Common sense suggests that some of the activities regarded as thinking don’t require language. For example, I can ask you to fix a lightbulb on a ceiling and show you three wooden boxes lying on the floor. You would have the internal sense of juggling the visual images of the boxes—stacking them up in your mind’s eye to reach the bulb socket—before actually doing so. It certainly doesn’t feel like you are engaging in silent internal speech—“Let me stack box A on box B,” and so on. It feels as if you do this kind of thinking visually and not by using language. But we have to be careful with this deduction because introspection about what’s going on in one’s head (stacking the three boxes) is not a reliable guide to what’s actually going on. It’s not inconceivable that what feels like the internal juggling of visual symbols actually taps into the same circuitry in the brain that mediates language, even though the task feels purely geometric or spatial. However much this seems to violate common sense, the activation of visual image–like representations may be incidental rather than causal.
Let’s leave visual imagery aside for the moment and ask the same question about the formal operations underlying logical thinking. We say, “If Joe is bigger than Sue, and if Sue is bigger than Rick, then Joe must be bigger than Rick.” You don’t have to conjure up mental images to realize that the deduction (“then Joe must be…”) follows from the two premises (“If Joe is…and if Sue is…”). It’s even easier to appreciate this if you substitute their names with abstract tokens like A, B, and C: If A > B and B > C, then it must be true that A > C. We can also intuit that if A > C and B > C, it doesn’t necessarily follow that A > B.
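Both intuitions can be spelled out mechanically, which is part of what makes them feel so rule-like. A minimal sketch in Python (the function name and the specific numbers are illustrative, not from the text):

```python
# The rule of transitivity for ">": if A > B and B > C,
# then A > C necessarily follows.
def follows_by_transitivity(a, b, c):
    if a > b and b > c:   # the two premises
        return a > c      # the deduction; always True when the premises hold
    return None           # premises not met, so nothing follows

print(follows_by_transitivity(10, 5, 2))  # True: Joe > Sue and Sue > Rick, so Joe > Rick

# By contrast, A > C and B > C do NOT fix the order of A and B:
# with C = 1, both (A=5, B=3) and (A=3, B=5) satisfy the premises.
print(5 > 1 and 3 > 1, 5 > 3)  # premises hold, and here A > B is True
print(3 > 1 and 5 > 1, 3 > 5)  # premises hold, and here A > B is False
```

Whatever numbers satisfy the first pair of premises, the conclusion comes out true; whatever numbers satisfy the second pair, the order of A and B is left open—exactly the asymmetry the intuition tracks.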
But where do these obvious deductions, based on the rules of transitivity, come from? Is it hardwired into your