that people’s memory of their own feelings shifted. Those who returned to Perot when Perot reentered the race tended to whitewash their negative memories of his withdrawal, forgetting how betrayed they had felt, while people who moved on from Perot and ultimately voted for another candidate whitewashed their positive memories of him, as if they had never intended to vote for him in the first place. Orwell would have been proud.[11]

Distortion and interference are just the tip of the iceberg. Any number of things would be a whole lot easier if evolution had simply vested us with postal-code memory. Take, for example, the seemingly trivial task of remembering where you last put your house keys. Nine times out of ten you may get it right, but if you should leave your keys in an atypical spot, all bets are off. An engineer would simply assign a particular memory location (known as a “buffer”) to the geographical coordinates of your keys, update the value whenever you moved them, and voila: you would never need to search the pockets of the pants you wore yesterday or find yourself locked out of your own home.
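The engineer's buffer-style fix can be sketched in a few lines. This is only an illustration of the idea, not anything the brain actually does: a single named slot per item, overwritten on every update, so only the latest value survives and lookup is exact.

```python
# A hypothetical "postal-code" memory: one addressable slot per item.
# Writing a new value simply erases the old one.
memory = {}

def put(item, location):
    """Store the item's current location, overwriting any old value."""
    memory[item] = location

def recall(item):
    """Retrieve the exact, most recent location: no searching required."""
    return memory.get(item)

put("keys", "hook by the door")
put("keys", "kitchen counter")   # moved: the old entry is replaced outright
print(recall("keys"))            # kitchen counter
```

Because the old value is destroyed on update, recency and frequency can never come into conflict, which is precisely the conflict the next paragraph describes in human memory.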

Alas, precisely because we can’t access memories by exact location, we can’t straightforwardly update specific memories, and we can’t readily “erase” information about where we put our keys in the past. When we place them somewhere other than their usual spot, recency (their most recent location) and frequency (where they’re usually placed) come into conflict, and we may well forget where the keys are. The same problem crops up when we try to remember where we last put our car, our wallet, our phone; it’s simply part of human life. Lacking proper buffers, our memory banks are a bit like a shoebox full of disorganized photographs: recent photos tend on average to be closer to the top, but this is not guaranteed. This shoebox-like system is fine when we want to remember some general concept (say, reliable locations for obtaining food) — in which case, remembering any experience, be it from yesterday or a year ago, might do. But it’s a lousy system for remembering particular, precise bits of information.

The same sort of conflict between recency and frequency explains the near-universal human experience of leaving work with the intention of buying groceries, only to wind up at home, having completely forgotten to stop at the grocery store. The behavior that is common practice (driving home) trumps the recent goal (our spouse’s request that we pick up some milk).

Preventing this sort of cognitive autopilot should have been easy. As any properly trained computer scientist will tell you, driving home and getting groceries are goals, and goals belong on a stack. A computer does one thing, then a user presses a key and the first goal (analogous to driving home) is temporarily interrupted by a new goal (getting groceries); the new goal is placed on top of the stack (it becomes top priority), until, when it is completed, it is removed from the stack, returning the old goal to the top. Any number of goals can then be pursued in precisely the right priority sequence. No such luck for us human beings.
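The goal stack described above can be modeled in a few lines of Python, using a plain list as the stack (the goal names are illustrative):

```python
# A minimal goal stack: the newest goal preempts the current one,
# and finishing it automatically restores whatever was interrupted.
goals = []

def push_goal(goal):
    goals.append(goal)               # new goal becomes top priority

def finish_goal():
    return goals.pop()               # done: the previous goal resurfaces

def current_goal():
    return goals[-1] if goals else None

push_goal("drive home")
push_goal("buy milk")                # the request interrupts the commute
print(current_goal())                # buy milk
finish_goal()
print(current_goal())                # drive home, nothing forgotten
```

The discipline is last-in, first-out: nothing on the stack can be lost or skipped, which is exactly what our drive-home autopilot fails to guarantee.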

Or consider another common quirk of human memory: the fact that our memory for what happened is rarely matched by memory for when it occurred. Whereas computers and videotapes can pinpoint events to the second (when a particular movie was recorded or particular file was modified), we’re often lucky if we can guess the year in which something happened, even if, say, it was in the headlines for months. Most people my age, for example, were inundated a few years ago with a rather sordid story involving two Olympic figure skaters; the ex-husband of one skater hired a goon to whack the other skater on the knee, in order to ruin the latter skater’s chance at a medal. It’s just the sort of thing the media love, and for nearly six months the story was unavoidable. But if today I asked the average person when it happened, I suspect he or she would have difficulty recalling the year, let alone the specific month.[12]
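For a machine, "when did this happen?" is a lookup rather than an inference, because a timestamp is stored alongside every record at write time. A minimal sketch (the event name is illustrative):

```python
from datetime import datetime

# Each record is stamped the moment it is written, so recalling
# "when" is just reading the stored value back.
events = {}

def record(event):
    events[event] = datetime.now()

record("file saved")
print(events["file saved"])   # exact to the microsecond
```

This is all that file systems and video recorders are doing when they "remember" a modification date to the second.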

For something that happened fairly recently, we can get around the problem by using a simple rule of thumb: the more recent the event, the more vivid the memory. But this vividness has its limits: events that have receded more than a couple of months into the past tend to blur together, frequently leaving us chronologically challenged. For example, when regular viewers of the weekly TV news program 60 Minutes were asked to recall when a series of stories aired, viewers could readily distinguish a story presented two months earlier from a story shown only a week before. But stories presented further in the past — say, two years versus four — all faded into an indistinct muddle.

Of course, there is always another workaround. Instead of simply trying to recall when something happened, we can try to infer this information. By a process known as “reconstruction,” we work backward, correlating an event of uncertain date with chronological landmarks that we’re sure of. To take another example ripped from the headlines, if I asked you to name the year in which O. J. Simpson was tried for murder, you’d probably have to guesstimate. As vivid as the proceedings were then, they are now (for me, anyway) beginning to get a bit hazy. Unless you are a trivia buff, you probably can’t remember exactly when the trial happened. Instead, you might reason that it took place before the Monica Lewinsky scandal but after Bill Clinton took office, or that it occurred before you met your significant other but after you went to college. Reconstruction is, to be sure, better than nothing, but compared to a simple time/date stamp, it’s incredibly clumsy.
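Reconstruction can be thought of as interval narrowing: each chronological landmark we are sure of tightens the window around the uncertain event. The landmark dates below are real (Clinton took office in 1993; the Lewinsky scandal broke in 1998); the inference procedure is the point.

```python
# "Reconstruction" as interval intersection: every "before" landmark
# lowers the upper bound, every "after" landmark raises the lower bound.
landmarks = [
    ("after", 1993),    # after Bill Clinton took office
    ("before", 1998),   # before the Lewinsky scandal broke
]

low, high = float("-inf"), float("inf")
for relation, year in landmarks:
    if relation == "after":
        low = max(low, year)
    else:
        high = min(high, year)

print(f"event falls between {low} and {high}")   # between 1993 and 1998
```

A five-year window is the best this clumsy procedure can manage here, whereas a time/date stamp would have answered in one step.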

A kindred problem is reminiscent of the sixth question every reporter must ask. Not who, what, when, where, or why, but how, as in How do I know it? What are my sources? Where did I see that somewhat frightening article about the Bush administration’s desire to invade Iran? Was it in The New Yorker? Or the Economist? Or was it just some paranoid but entertaining blog? For obvious reasons, cognitive psychologists call this sort of memory “source memory.” And source memory, like our memory for times and dates, is, for want of a proper postal code, often remarkably poor. One psychologist, for example, asked a group of test subjects to read aloud a list of random names (such as Sebastian Weisdorf). Twenty-four hours later he asked them to read a second list of names and to identify which ones belonged to famous people and which didn’t. Some were in fact the names of celebrities, and some were made up; the interesting thing is that some were made-up names drawn from the first list. If people had good source memory, they would have spotted the ruse. Instead, most subjects knew they had seen a particular name before, but they had no idea where. Recognizing a name like Sebastian Weisdorf but not recalling where they’d seen it, people mistook Weisdorf for the name of a bona fide celebrity whom they just couldn’t place. The same thing happens, with bigger stakes, when voters forget whether they heard some political rumor on Letterman or read it in the New York Times.
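The false-fame error disappears the moment provenance is stored with the item itself, content and source as a single record. A toy sketch (the entries are illustrative):

```python
# If every memory carried its source, familiarity would always arrive
# tagged with where it was acquired, and a merely-seen name could not
# be mistaken for a famous one.
memory = {
    "Sebastian Weisdorf": "read aloud from yesterday's study list",
    "Bill Clinton": "news coverage",
}

def why_familiar(name):
    return memory.get(name, "never encountered")

print(why_familiar("Sebastian Weisdorf"))
# read aloud from yesterday's study list
```

Human source memory, by contrast, routinely keeps the content while dropping the tag, which is all the experiment above needed to manufacture counterfeit celebrities.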

The workaround by which we “reconstruct” memory for dates and times is but one example of the many clumsy techniques that humans use to cope with the lack of postal-code memory. If you Google for “memory tricks,” you’ll find dozens more.

Take, for example, the ancient “method of loci.” If you have a long list of words to remember, you can associate each one with a specific room in a familiar large building: the first word with the vestibule, the second word with the living room, the third word with the dining room, the fourth with the kitchen, and so forth. This trick, which is used in adapted form by all the world’s leading mnemonists, works pretty well, since each room provides a different context for memory retrieval — but it’s still little more than a Band-Aid, one more solution we shouldn’t need in the first place.

Another classical approach, so prominent in rap music, is to use rhyme and meter as an aid to memorization. Homer had his hexameter, Tom Lehrer had his song “The Elements” (“There’s antimony, arsenic, aluminum, selenium, / And hydrogen and oxygen and nitrogen and rhenium…”), and the band They Might Be Giants have their cover of “Why Does the Sun Shine? (The Sun Is a Mass of Incandescent Gas).”

Actors often take these mnemonic devices one step further. Not only do they remind themselves of their next lines by using cues of rhythm, syntax, and rhyme; they also focus on their character’s motivations and actions, as well as those of other characters. Ideally, this takes place automatically. In the words of the actor Michael Caine, the goal is to get immersed in the story, rather than worry about specific lines. “You must be able to stand there not thinking of that line. You take it off the other actor’s face.” Some performers can do this rather well; others struggle with it (or rely on cue cards). The point is, memorizing lines will never be as easy for us as it would be for a computer. We retrieve memorized information not by reading files from a specific sector of the hard drive but by cobbling together as many clues as possible — and hoping for the best.

Even the oldest standby — simple rehearsal, repeating something over and over — is a bit of clumsiness that shouldn’t be necessary. Rote memorization works somewhat well because it exploits the brain’s attachment to memories based on frequently occurring events, but here too the solution is hardly elegant. An ideal memory
