extract) you could make the axolotl revert to the extinct, land-dwelling, gill-less adult ancestor that it had evolved from. You could go back in time, resurrecting a prehistoric animal that no longer exists anywhere on Earth. I also knew that for some mysterious reason adult salamanders don’t regenerate amputated legs but the tadpoles do. My curiosity took me one step further, to the question of whether an axolotl—which is, after all, an “adult tadpole”—would retain its ability to regenerate a lost leg just as a modern frog tadpole does. And how many other axolotl-like beings exist on Earth, I wondered, that could be restored to their ancestral forms simply by giving them hormones? Could humans—who are, after all, apes that have evolved to retain many juvenile qualities—be made to revert to an ancestral form, perhaps something resembling Homo erectus, using the appropriate cocktail of hormones? My mind reeled out a stream of questions and speculations, and I was hooked on biology forever.

I found mysteries and possibilities everywhere. When I was eighteen, I read a footnote in some obscure medical tome stating that when a person with a sarcoma, a malignant cancer that affects soft tissues, develops a high fever from an infection, the cancer sometimes goes into complete remission. Cancer shrinking as a result of fever? Why? What could explain it, and might it just possibly lead to a practical cancer therapy?1 I was enthralled by the possibility of such odd, unexpected connections, and I learned an important lesson: Never take the obvious for granted. Once upon a time, it was so obvious that a four-pound rock would plummet earthward twice as fast as a two-pound rock that no one ever bothered to test it. That is, until Galileo Galilei came along and took ten minutes to perform an elegantly simple experiment that yielded a counterintuitive result and changed the course of history.

I had a boyhood infatuation with botany too. I remember wondering how I might get ahold of my own Venus flytrap, which Darwin had called “the most wonderful plant in the world.” He had shown that it snaps shut when you touch two hairs inside its trap in rapid succession. The double trigger makes it much more likely that it is responding to the motions of insects rather than to inanimate detritus falling or drifting in at random. Once it has clamped down on its prey, the plant stays shut and secretes digestive enzymes, but only if it has caught actual food. I was curious. What defines food? Will it stay shut for amino acids? Fatty acids? Which acids? Starch? Pure sugar? Saccharin? How sophisticated are the food detectors in its digestive system? Alas, I never did manage to acquire one as a pet at the time.

My mother actively encouraged my early interest in science, bringing me zoological specimens from all over the world. I remember particularly well the time she gave me a tiny dried seahorse. My father also approved of my obsessions. He bought me a Carl Zeiss research microscope when I was still in my early teens. Few things could match the joy of looking at paramecia and volvox through a high-power objective lens. (Volvox, I learned, is the only biological creature on the planet that actually has a wheel.) Later, when I headed off to university, I told my father my heart was set on basic science. Nothing else stimulated my mind half as much. Wise man that he was, he persuaded me to study medicine. “You can become a second-rate doctor and still make a decent living,” he said, “but you can’t be a second-rate scientist; it’s an oxymoron.” He pointed out that if I studied medicine I could play it safe, keeping both doors open and deciding after graduation whether I was cut out for research or not.

All my arcane boyhood pursuits had what I consider to be a pleasantly antiquated, Victorian flavor. The Victorian era ended over a century ago (technically in 1901) and might seem remote from twenty-first-century neuroscience. But I feel compelled to mention my early romance with nineteenth-century science because it was a formative influence on my style of thinking and conducting research.

Simply put, this “style” emphasizes conceptually simple and easy-to-do experiments. As a student I read voraciously, not only about modern biology but also about the history of science. I remember reading about Michael Faraday, the lower-class, self-educated man who discovered electromagnetic induction. In the early 1800s he placed a bar magnet behind a sheet of paper and threw iron filings on the sheet. The filings instantly aligned themselves into arcing lines. He had rendered the magnetic field visible! This was about as direct a demonstration as possible that such fields are real and not just mathematical abstractions. Next Faraday moved a bar magnet to and fro through a coil of copper wire, and lo and behold, an electric current started running through the coil. He had demonstrated a link between two entirely separate areas of physics: magnetism and electricity. This paved the way not only for practical applications—such as hydroelectric power, electric motors, and electromagnets—but also for the deep theoretical insights of James Clerk Maxwell. With nothing more than bar magnets, paper, and copper wire, Faraday had ushered in a new era in physics.

I remember being struck by the simplicity and elegance of these experiments. Any schoolboy or -girl can repeat them. It was not unlike Galileo dropping his rocks, or Newton using two prisms to explore the nature of light. For better or worse, stories like these made me a technophobe early in life. I still find it hard to use an iPhone, but my technophobia has served me well in other respects. Some colleagues have warned me that this phobia might have been okay in the nineteenth century when biology and physics were in their infancy, but not in this era of “big science,” in which major advances can only be made by large teams employing high-tech machines. I disagree. And even if it is partly true, “small science” is much more fun and can often turn up big discoveries. It still tickles me that my early experiments with phantom limbs (see Chapter 1) required nothing more than Q-tips, glasses of warm and cold water, and ordinary mirrors. Hippocrates, Sushruta, my ancestral sage Bharadwaja, or any other physicians between ancient times and the present could have performed these same basic experiments. Yet no one did.

Or consider Barry Marshall’s research showing that ulcers are caused by bacteria—not acid or stress, as every doctor “knew.” In a heroic experiment to convince skeptics of his theory, he actually swallowed a culture of the bacterium Helicobacter pylori and showed that his stomach lining became studded with painful ulcers, which he promptly cured by taking antibiotics. He and others later went on to show that many other disorders, including stomach cancer and even heart attacks, might be triggered by microorganisms. In just a few weeks, using materials and methods that had been available for decades, Dr. Marshall had ushered in a whole new era of medicine. Two decades later he won a Nobel Prize.

My preference for low-tech methods has both strengths and drawbacks, of course. I enjoy it—partly because I’m lazy—but it isn’t everyone’s cup of tea. And this is a good thing. Science needs a variety of styles and approaches. Most individual researchers need to specialize, but the scientific enterprise as a whole is made more robust when scientists march to different drumbeats. Homogeneity breeds weakness: theoretical blind spots, stale paradigms, an echo-chamber mentality, and cults of personality. A diverse dramatis personae is a powerful tonic against these ailments. Science benefits from its inclusion of the abstraction-addled, absent-minded professors, the control-freak obsessives, the cantankerous bean-counting statistics junkies, the congenitally contrarian devil’s advocates, the hard-nosed data-oriented literalists, and the starry-eyed romantics who embark on high-risk, high-payoff ventures, stumbling frequently along the way. If every scientist were like me, there would be no one to clear the brush or demand periodic reality checks. But if every scientist were a brush-clearing, never-stray-beyond-established-fact type, science would advance at a snail’s pace and would have a hard time unpainting itself out of corners. Getting trapped in narrow cul-de-sac specializations and “clubs” whose membership is open only to those who congratulate and fund each other is an occupational hazard in modern science.

When I say I prefer Q-tips and mirrors to brain scanners and gene sequencers, I don’t mean to give you the impression that I eschew technology entirely. (Just think of doing biology without a microscope!) I may be a technophobe, but I’m no Luddite. My point is that science should be question driven, not methodology driven. When your department has spent millions of dollars on a state-of-the-art liquid-helium-cooled brain-imaging machine, you come under pressure to use it all the time. As the old saying goes, “When the only tool you have is a hammer, everything starts to look like a nail.” But I have nothing against high-tech brain scanners (nor against hammers). Indeed, there is so much brain imaging going on these days that some significant discoveries are bound to be made, if only by accident. One could justifiably argue that the modern toolbox of state-of-the-art gizmos has a vital and indispensable place in research. And indeed, my low-tech-leaning colleagues and I often do take advantage of brain imaging.
