continuance of the Skylab mission.
But all these comparisons turn out, naturally enough, to have been written by humans. I wonder whether a small self-congratulatory element, a whiff of human chauvinism, has not crept into these judgments. Just as whites can sometimes detect racism and men can occasionally discern sexism, I wonder whether we cannot here glimpse some comparable affliction of the human spirit, a disease that as yet has no name. The word “anthropocentrism” does not mean quite the same thing. The word “humanism” has been pre-empted by other and more benign activities of our kind. From the analogy with sexism and racism I suppose the name for this malady is “speciesism”: the prejudice that there are no beings so fine, so capable, so reliable as human beings.
This is a prejudice because it is, at the very least, a prejudgment, a conclusion drawn before all the facts are in. Such comparisons of men and machines in space are comparisons of smart men and dumb machines. We have not asked what sorts of machines could have been built for the $30-or-so billion that the Apollo and Skylab missions cost.
Each human being is a superbly constructed, astonishingly compact, self-ambulatory computer, capable on occasion of independent decision making and real control of his or her environment. And, as the old joke goes, these computers can be constructed by unskilled labor. But there are serious limitations to employing human beings in certain environments. Without a great deal of protection, human beings would be inconvenienced on the ocean floor, the surface of Venus, the deep interior of Jupiter, or even on long space missions. Perhaps the only interesting result of Skylab that could not have been obtained by machines is the finding that human beings in space for a period of months undergo a serious loss of bone calcium and phosphorus, which seems to imply that human beings may be incapacitated under 0 g for missions of six to nine months or longer. But the minimum interplanetary voyages have characteristic times of a year or two. Because we value human beings highly, we are reluctant to send them on very risky missions. If we do send human beings to exotic environments, we must also send along their food, their air, their water, amenities for entertainment and waste recycling, and companions. By comparison, machines require no elaborate life-support systems, no entertainment, no companionship, and we do not yet feel any strong ethical prohibitions against sending machines on one-way, or suicide, missions.
Certainly, for simple missions, machines have proved themselves many times over. Unmanned vehicles have performed the first photography of the whole Earth and of the far side of the Moon; the first landings on the Moon, Mars and Venus; and the first thorough orbital reconnaissance of another planet, in the Mariner 9 and Viking missions to Mars. Here on Earth it is increasingly common for high-technology manufacturing (for example, in chemical and pharmaceutical plants) to be performed largely or entirely under computer control. In all these activities machines are able, to some extent, to sense errors, to correct mistakes, and to alert human controllers some great distance away about perceived problems.
The powerful abilities of computing machines to do arithmetic, hundreds of millions of times faster than unaided human beings, are legendary. But what about really difficult matters? Can machines in any sense think through a new problem? Can they make decisions of the branched-contingency-tree variety which we think of as characteristically human? (That is, I ask Question 1; if the answer is A, I ask Question 2; but if the answer is B, I ask Question 3; and so on.) Some decades ago the English mathematician A. M. Turing described what would be necessary for him to believe in machine intelligence. The condition was simply that he could be in teletype communication with a machine and be unable to tell that it was not a human being. Turing imagined a conversation between a man and a machine of the following quality:
INTERROGATOR: In the first line of your sonnet which reads “Shall I compare thee to a Summer’s day,” would not “a Spring day” do as well or better?
WITNESS: It wouldn’t scan.
INTERROGATOR: How about “a Winter’s day”? That would scan all right.
WITNESS: Yes, but nobody wants to be compared to a Winter’s day.
INTERROGATOR: Would you say Mr. Pickwick reminded you of Christmas?
WITNESS: In a way.
INTERROGATOR: Yet Christmas is a Winter’s day, and I do not think Mr. Pickwick would mind the comparison.
WITNESS: I don’t think you’re serious. By a Winter’s day one means a typical Winter’s day, rather than a special one like Christmas.
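The branched-contingency questioning mentioned a little earlier, ask Question 1 and let the answer decide which question comes next, can be made concrete in a few lines of code. What follows is only a minimal sketch in Python; the tree, the run function and the questions themselves are invented for illustration and are not taken from Turing or from any actual flight program.

# A minimal sketch of a branched-contingency tree. Each answer to a
# question selects the next question, or a concluding action.
decision_tree = {
    "Q1": {"question": "Is the instrument reading nominal?",
           "A": "Q2", "B": "Q3"},
    "Q2": {"question": "Shall we proceed to the next experiment?",
           "A": "conclusion: proceed", "B": "conclusion: hold and repeat"},
    "Q3": {"question": "Can the fault be corrected on board?",
           "A": "conclusion: correct and retry", "B": "conclusion: alert Earth"},
}

def run(tree, answers, start="Q1"):
    """Walk the tree: ask each question and branch on the supplied answer."""
    node = start
    answers = iter(answers)
    while node in tree:
        entry = tree[node]
        print(entry["question"])
        reply = next(answers)
        node = entry.get(reply, "conclusion: unrecognized answer")
    print(node)  # a leaf of the tree: the machine's conclusion

run(decision_tree, answers=["A", "B"])   # asks Q1, then Q2, then concludes

Even so simple a structure lets the machine’s next move depend on what it has just learned; a conversation of the quality Turing imagined requires enormously more than such a fixed tree.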
No device of the sophistication Turing imagined has yet been built, although I am not sure how many humans would pass Turing’s test themselves. But the amount of effort and money put into artificial intelligence has been quite limited, and there are only about a half-dozen major centers of such activity in the world. One of the more striking results obtained in a very limited universe of discourse, that of children’s blocks, has come from the work of Terry Winograd, then at the Massachusetts Institute of Technology. Here is a dialogue between man and machine, the machine having been programmed by Winograd:
Q: What does the box contain?
A: The blue pyramid and the blue box.
Q: How many blocks are not in the box?
A: Four of them.
Q: Is at least one of them narrower than the one which I told you to pick up?
A: Yes, the red cube.
Q: Is there a steeple?
A: Sorry, I don’t know the word “steeple.”
Q: A steeple is a stack which contains two green cubes and a pyramid, for example.
A: I understand.
Q: Are there any steeples now?
A: No.
Q: Build one.
The machine does. It reaches out its mechanical hand, moves the cubes and builds the structure that has just been described.
Q: When did you pick up the pyramid?
A: While I was stacking up the red cube, the large red block and the large green cube.
Q: Why?
A: To get rid of it.
Q: Why did you clear off that cube?
A: To put it on the large green cube.
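Winograd’s program is, of course, far more capable than anything that fits in a few lines, but the flavor of so restricted a universe of discourse can be suggested by a toy model. The sketch below, in Python, uses an invented handful of blocks and properties; none of the names or numbers come from Winograd’s actual program.

# A toy blocks world (all objects and properties invented, and vastly
# simpler than Winograd's program). Each object carries the few
# properties that the questions below inspect.
blocks = [
    {"name": "red cube",        "width": 2, "in_box": False},
    {"name": "blue pyramid",    "width": 2, "in_box": True},
    {"name": "blue box",        "width": 4, "in_box": True},
    {"name": "green cube",      "width": 3, "in_box": False},
    {"name": "large red block", "width": 5, "in_box": False},
]

def contents_of_box():
    """What does the box contain?"""
    return [b["name"] for b in blocks if b["in_box"]]

def not_in_box():
    """Which blocks are not in the box?"""
    return [b["name"] for b in blocks if not b["in_box"]]

def narrower_than(width):
    """Which blocks outside the box are narrower than a given width?"""
    return [b["name"] for b in blocks if not b["in_box"] and b["width"] < width]

print(contents_of_box())        # ['blue pyramid', 'blue box']
print(len(not_in_box()))        # 3
print(narrower_than(3))         # ['red cube']

In this toy, each question in the dialogue corresponds to nothing more than a filter over a short list of objects. The difficulty of Winograd’s problem lies in getting from English sentences to such operations, and in remembering, as his program does, why each action was taken.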
As another example, a machine psychiatrist has been developed by Joseph Weizenbaum, also at MIT. This is a much simpler program than Winograd’s, since it involves nondirective psychotherapy, which, generally speaking, requires extremely simple-minded computer programs. Here is a striking dialogue between a human being and a computer psychotherapist. (There is certainly a selection effect here. Many other such machine/human psychiatric dialogues are not nearly so striking. But I suppose most human/human psychiatric dialogues are not very striking either.) In this interaction, in order to spell out clearly the respective roles of therapist and patient, the computer therapist types in upper case while the patient types in lower case:
men are all alike
IN WHAT WAY
they’re always bugging us about something or other
CAN YOU THINK OF A SPECIFIC EXAMPLE
well, my boy friend made me come here
YOUR BOY FRIEND MADE YOU COME HERE
he noticed that i’m depressed much of the time
I AM SORRY TO HEAR YOU ARE DEPRESSED
he’s right. i am unhappy
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
i need some help, that much is clear