articulatory choreography, the only way to keep up the speed of communication was to cut corners. Rather than produce every phoneme as a separate, distinct element (as a simple computer modem would), our speech system starts preparing sound number two while it’s still working on sound number one. Thus, before I start uttering the h in happy, my tongue is already scrambling into position in anticipation of the a. When I’m working on a, my lips are already getting ready for the pp, and when I’m on pp, I’m moving my tongue in preparation for the y.

This dance keeps the speed up, but it requires a lot of practice and can complicate the interpretation of the message.[31] What’s good for muscle control isn’t necessarily good for a listener. If you should mishear John Fogerty’s “There’s a bad moon on the rise” as “There’s a bathroom on the right,” so be it. From the perspective of evolution, the speech system, which works most of the time, is good enough, and that’s all that matters.

Curmudgeons of every generation think that their children and grandchildren don’t speak properly. Ogden Nash put it this way in 1962, in “Laments for a Dying Language”:

Coin brassy words at will, debase the coinage;
We’re in an if-you-cannot-lick-them-join age,
A slovenliness provides its own excuse age,
Where usage overnight condones misusage.
Farewell, farewell to my beloved language,
Once English, now a vile orangutanguage.

Words in computer languages are fixed in meaning, but words in human languages change constantly; one generation’s bad means “bad,” and the next generation’s bad means “good.” Why is it that languages can change so quickly over time?

Part of the answer stems from how our prelinguistic ancestors evolved to think about the world: not as philosophers or mathematicians, brimming with precision, but as animals perpetually in a hurry, frequently settling for solutions that are “good enough” rather than definitive.

Take, for example, what might happen if you were walking through the Redwood Forest and saw a tree trunk; odds are, you would conclude that you were looking at a tree, even if that trunk happened to be so tall that you couldn’t make out any leaves above. This habit of making snap judgments based on incomplete evidence (no leaves, no roots, just a trunk, and still we conclude we’ve seen a tree) is something we might call a logic of “partial matching.”

The logical antithesis, of course, would be to wait until we’d seen the whole thing; call that a logic of “full matching.” As you can imagine, he who waits until he’s seen the whole tree would never be wrong, but also risks missing a lot of bona fide foliage. Evolution rewarded those who were swift to decide, not those who were too persnickety to act.
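The two logics can be made concrete with a toy sketch. This is purely illustrative and not from the text: the feature set and the threshold are invented, but they capture the contrast between a matcher that demands every defining feature and one that settles for "good enough."

```python
# Toy illustration of "full matching" vs. "partial matching".
# The feature list and the 40% threshold are invented for this sketch.

TREE_FEATURES = {"trunk", "bark", "branches", "leaves", "roots"}

def full_match(observed):
    """Full matching: conclude 'tree' only if every defining feature is seen."""
    return TREE_FEATURES <= observed

def partial_match(observed, threshold=0.4):
    """Partial matching: conclude 'tree' once enough features are seen."""
    overlap = len(TREE_FEATURES & observed)
    return overlap / len(TREE_FEATURES) >= threshold

glimpse = {"trunk", "bark"}        # a tall redwood, leaves out of sight
print(full_match(glimpse))         # False: the strict matcher withholds judgment
print(partial_match(glimpse))      # True: the quick matcher calls it a tree
```

The strict matcher is never wrong, but it stays silent about the redwood; the partial matcher decides immediately, at the cost of occasional mistakes — exactly the trade-off evolution resolved in favor of speed.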

For better or worse, language inherited this system wholesale. You might think of a chair, for instance, as something with four legs, a back, and a horizontal surface for sitting. But as the philosopher Ludwig Wittgenstein (1889-1951) realized, few concepts are really defined with such precision. Beanbag chairs, for example, are still considered chairs, even though they have neither an articulated back nor any sort of legs.

I call my cup of water a glass even though it’s made of plastic; I call my boss the chair of my department even though so far as I can tell she merely sits in one. A linguist or phylogenist uses the word tree to refer to a diagram on a page simply because it has branching structures, not because it grows, reproduces, or photosynthesizes. A head is the topside of a penny, the tail the bottom, even though the top has no more than a picture of a head, the bottom not a fiber of a wagging tail. Even the slightest fiber of connection suffices, precisely because words are governed by an inherited, ancestral logic of partial matches.[32]

Another idiosyncrasy of language, considerably more subtle, has to do with words like some, every, and most, known to linguists as “quantifiers” because they quantify, answering questions like “How much?” and “How many?”: some water, every boy, most ideas, several movies.

The peculiar thing is that in addition to quantifiers, we have another whole system that does something similar. This second system traffics in what linguists call “generics,” somewhat vague, generally accurate statements, such as Dogs have four legs or Paperbacks are cheaper than hardcovers. A perfect language might stick only to the first system, using explicit quantifiers rather than generics. An explicitly quantified sentence such as Every dog has four legs makes a nice, strong, clear statement, promising no exceptions. We know how to figure out whether it is true. Either all the dogs in the world have four legs, in which case the sentence is true, or at least one dog lacks four legs, in which case the sentence is false — end of story. Even a quantifier like some is fairly clear in its application; some has to mean more than one, and (pragmatically) ought not to mean every.

Generics are a whole different ball game, in many ways much less precise than quantifiers. It’s just not clear how many dogs have to have four legs before the statement Dogs have four legs can be considered true, and how many dogs would have to exhibit three legs before we’d decide that the statement is false. As for Paperbacks are cheaper than hardcovers, most of us would accept the sentence as true as a general rule of thumb, even if we knew that lots of individual paperbacks (say, imports) are more expensive than many individual hardcovers (such as discounted bestsellers printed in large quantities). We agree with the statement Mosquitoes carry the West Nile virus, even if only (say) 1 percent of mosquitoes carry the virus, yet we wouldn’t accept the statement Dogs have spots even if all the dalmatians in the world did.

Computer-programming languages admit no such imprecision; they have ways of representing formal quantifiers ([DO THIS THING REPEATEDLY UNTIL EVERY DATABASE RECORD HAS BEEN EXAMINED]) but no way of expressing generics at all. Human languages are idiosyncratic — and verging on redundant — inasmuch as they routinely exploit both systems, generics and the more formal quantifiers.
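The contrast is easy to see in an actual programming language. In the sketch below (Python is assumed, and the dog records are invented), "every" and "some" map directly onto built-in constructs, while the generic "Dogs have four legs" has no counterpart:

```python
# Formal quantifiers have direct programming-language equivalents.
# The records below are invented for this illustration.

dogs = [{"name": "Rex", "legs": 4},
        {"name": "Fido", "legs": 4},
        {"name": "Tripod", "legs": 3}]

# "Every dog has four legs": true only if there are no exceptions.
every_dog_has_four_legs = all(d["legs"] == 4 for d in dogs)

# "Some dog has four legs": true if at least one example exists.
some_dog_has_four_legs = any(d["legs"] == 4 for d in dogs)

print(every_dog_has_four_legs)   # False: Tripod is a counterexample
print(some_dog_has_four_legs)    # True

# There is no construct for the generic "Dogs have four legs":
# its truth conditions (how many exceptions are tolerable?) are
# not something the language can express.
```

The quantified versions have crisp truth conditions a machine can check; the generic does not, which is precisely the asymmetry the text describes.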

Why do we have both systems? Sarah-Jane Leslie, a young Princeton philosopher, has suggested one possible answer. The split between generics and quantifiers may reflect the divide in our reasoning capacity, between a sort of fast, automatic system on the one hand and a more formal, deliberative system on the other. Formal quantifiers rely on our deliberative system (which, when we are being careful, allows us to reason logically), while generics draw on our ancestral reflexive system. Generics are, she argues, essentially a linguistic realization of our older, less formal cognitive systems. Intriguingly, our sense of generics is “loose” in a second way: we are prepared to accept as true generics like Sharks attack bathers or Pit bulls maul children even though the circumstances they describe are statistically very rare, provided that they are vivid or salient — just the kind of response we might expect from our automatic, less deliberative system.

Leslie further suggests that generics seem to be learned first in childhood, before formal quantifiers; moreover, they may have emerged earlier in the development of language. At least one contemporary language (Pirahã, spoken in the Amazon Basin) appears to employ generics but not formal quantifiers. All of this suggests one more way in which the particular details of human languages depend on the idiosyncrasies of how our mind evolved.

For all that, I doubt many linguists would be convinced that language is truly a kluge. Words are one thing, sentences another; even if words are clumsy, what linguists really want to know about is syntax, the glue that binds words together. Could it be that words are a mess, but grammar is different, a “near-perfect” or “optimal” system for connecting sound and meaning?

In the past several years, Noam Chomsky, the founder and leader of modern linguistics, has taken to arguing just that. In particular, Chomsky has wondered aloud whether language (by which he means mainly the syntax of sentences) might come close “to what some super-engineer would construct, given the conditions that the language faculty must satisfy.” As linguists like Tom Wasow and Shalom Lappin have pointed out, there is
