Monday, May 3, 1982
The Mind in the Machine
By Roger Rosenblatt
The factory robot that crushed a man to death in Japan last year did little to silence the talk that machines are a threat to human preeminence. That talk has been alive ever since people first discovered that they could manufacture tools vastly superior to themselves; in Samuel Butler's satire Erewhon (1872), the citizens establish a museum of old machines in which they at once deposit and abandon their mechanical inventions, which they believed would swallow up their souls. When machines possess artificial intelligence, like computers, the human fear of being overtaken seems both more urgent and more complex. Science-fiction writers from Čapek to Asimov have built much of their genre around robots, androids, computers and their kin--each fairly boring and predictable as characters, but all presenting the same basic paradox: that they will eventually take command of the world, and that a man can beat them every time.
This proposition is not as contradictory as it sounds, since, for one thing, the fact that it takes real (i.e., natural) intelligence to create the artificial kind would seem to give humanity a perpetual edge; and, for another, people are able to hold contraries in their heads simultaneously--a feat that is reassuring in itself, since it is one of the things that distinguishes men from machines. Indeed, man tends to search out such distinctions even as he endows his machines with increasing sophistication, as if to make sure that his evolution is no slower than that of his creations. Still, it is sometimes nerve-racking to read what these boxes can do. Joseph Weizenbaum, of M.I.T., who has recently come to criticize computer education for children as a context for "toy problems," nonetheless got more than a toy gasp out of the public in 1966 when he unveiled his project ELIZA--an IBM 7094 programmed by Weizenbaum to "practice" psychotherapy--and then published a "conversation" between ELIZA and a real patient:
Patient: You are like my father in some ways.
ELIZA: What resemblance do you see?
Patient: You are not very aggressive, but I think you don't want me to notice that.
ELIZA: What makes you think I am not very aggressive?
Patient: You don't argue with me.
ELIZA: Why do you think I don't argue with you?
Patient: You are afraid of me.
ELIZA: Does it please you to believe I am afraid of you?
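Weizenbaum's program achieved its effect not through understanding but through simple keyword spotting and pronoun reflection: it matched the patient's sentence against a list of patterns and echoed a fragment back with first and second person swapped. A minimal sketch of that technique follows; the patterns, the reflection table and the respond function here are illustrative stand-ins, not Weizenbaum's actual script, which was far more elaborate.

```python
import re

# Pronoun swaps used when echoing the patient's words back.
# (Illustrative only -- ELIZA's real transformation rules were richer.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are", "are": "am",
}

# (pattern, response template) pairs, tried in order; the catch-all
# last rule stands in for ELIZA's stock of content-free prompts.
RULES = [
    (r"you are (.*)", "What makes you think I am {0}?"),
    (r"i think (.*)", "Do you really think {0}?"),
    (r"you (.*) me", "Why do you think I {0} you?"),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    """Swap first- and second-person words in a matched fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Answer with the first rule whose pattern matches the input."""
    text = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("You are afraid of me."))     # What makes you think I am afraid of you?
print(respond("You don't argue with me."))  # Why do you think I don't argue with you?
```

Even this toy reproduces ELIZA's replies to "You are afraid of me" and "You don't argue with me" in the exchange above, which is precisely what made the parody unsettling.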
The elements of parody notwithstanding, it is no joke to see a machine become a man, especially in an era whose main (if somewhat pro forma) worry centers on men becoming machines.
A great deal of intellectual effort is therefore spent these days--mostly by the computer scientists themselves--trying to reassure everybody that, as smart as a machine can get, it can never be as intelligent as its progenitor. In part, this effort is made in order to see that the wizened, noncomputer generation--which often regards the younger with the unbridled enthusiasm that the Chinese showed the Mongol hordes--feels that it has a safe and legitimate place in modernity. In part, the effort is made because the proposition is true: a computer cannot possess the full range of human intelligence. Yet, in terms of reconciling man and machine, this effort still misses the point. The cultural value of computers does not lie in perceiving what they cannot do, but what they can, and what, in turn, their capabilities show about our own. In other words, a computer may not display the whole of human intelligence, but that portion it can display could do a lot more good for man's self-confidence than continuing reassurances that he is in no immediate danger of death by robot.
Essentially, what one wants to know in sorting out this relationship is the answers to two questions: Can computers think (a technical problem)? And, should they think (a moral one)? In order to get at both, it is first necessary to agree on what thinking itself is--what thought means--and that is no quick step. Every period in history has had to deal with at least two main definitions of thought, which mirror the prevailing philosophies of that particular time and are usually in opposition. Moreover, these contending schools change from age to age. On a philosophical level, thought cannot know itself because it cannot step outside itself. Nor is it an activity that can be understood by what it produces (art, science, dreams). To Freud the mind was a house; to Plato a cave. These are fascinating, workable metaphors, but the fact is that in each case an analogy had to be substituted for an equation.
At the same time, certain aspects of thinking can be identified without encompassing the entire process. The ability to comprehend, to conceptualize, to organize and reorganize, to manipulate, to adjust--these are all parts of thought. So are the acts of pondering, rationalizing, worrying, brooding, theorizing, contemplating, criticizing. One thinks when one imagines, hopes, loves, doubts, fantasizes, vacillates, regrets. To experience greed, pride, joy, spite, amusement, shame, suspicion, envy, grief--all these require thought; as do the decisions to take command, or umbrage; to feel loyalty or inhibitions; to ponder ethics, self-sacrifice, cowardice, ambition. So vast is the mind's business that even as one makes such a list, its inadequacy is self-evident--the recognition of inadequacy being but another part of an enormous and varied instrument.
The answer to the first question, then--Can a machine think?--is yes and no. A computer can certainly do some of the above. It can (or will soon be able to) transmit and receive messages, "read" typescript, recognize voices, shapes and patterns, retain facts, send reminders, "talk" or mimic speech, adjust, correct, strategize, make decisions, translate languages. And, of course, it can calculate, that being its specialty. Yet there are hundreds of kinds of thinking that computers cannot come close to. And for those merely intent on regarding the relationship of man to machine as a head-to-artificial-head competition, this fact offers some solace--if not much progress.
For example, the Apollo moon shot in July 1969 relied on computers at practically every stage of the operation. Before taking off, the astronauts used computerized simulations of the flight. The spacecraft was guided by a computer, which stored information about the gravitational fields of the sun and moon, and calculated the craft's position, speed and altitude. This computer, which determined the engines to be fired, and when, and for how long, took part of its own information from another computer on the ground. As the Apollo neared the moon, a computer triggered the firing of a descent rocket, slowed the lunar module, and then signaled Neil Armstrong that he had five seconds to decide whether or not to go ahead with the landing. At 7,200 ft., a computer commanded the jets to tilt the craft almost upright so that Armstrong and Aldrin could take a close look at what the world had been seeking for centuries.
Would one say, then, that computers got men to the moon? Of course not. A machine is merely a means. What got man to the moon was his desire to go there--desire being yet another of those elements that a computer cannot simulate or experience. It was far less interesting, for instance, that Archimedes believed he could move the earth with his lever than that he wanted to try it. Similarly, no machine could have propelled man to the moon had not the moon been in man in the first place.
Thus the second question--Should a machine think?--answers itself. The question is not in fact the moral problem it at first appears, but purely a practical one. Yes, a machine should think as much as it can, because it can only think in limited terms. Hubert Dreyfus, a philosophy professor at Berkeley, observes that "all aspects of human thought, including nonformal aspects like moods, sensory-motor skills and long-range self-interpretations, are so interrelated that one cannot substitute an abstractable web of explicit beliefs for the whole cloth of our concrete everyday practice." Marianne Moore saw the web her own way: "The mind is an enchanting thing,/ is an enchanted thing/ like the glaze on a/ katydid-wing/ subdivided by sun/ till the nettings are legion,/ Like Gieseking playing Scarlatti." In short, human intelligence is too intricate to be replicated. When a computer can smile at an enemy, cheat at cards and pray in church all in the same day, then, perhaps, man will know his like. Until then, no machine can touch us.
For the sake of argument, however, what if Dreyfus, Moore and common sense were all wrong? What if the mind with its legion nettings could in fact be replicated in steel and plastic, and all human befuddlements find their way onto a program--would the battle be lost? Hardly. The moon is always in the man. Even if it were possible to reduce people to box size and have them plonked down before themselves in all their powers, they would still want more. Whatever its source, there is a desire that outdesires desire; otherwise computers would not have come into being. As fast as the mind travels, it somehow manages to travel faster than itself, and people always know, or sense, what they do not know. No machine does that. A computer can achieve what it does not know (not knowing that 2+2=4, it can find out), but it cannot yearn for the answer or even dimly suspect its existence. If people knew where such suspicions and yearnings came from, they might be able to lock them in silicon. But they do not know what they are; they merely know that they are--just as in the long run they only know that they exist, not what their existence is or what it means. The difference between us and any machine we create is that a machine is an answer, and we are a question.
But is there anything really startling in this? With all the shouting and sweating that go on about machines taking over the world, does anyone but a handful of zealots and hysterics seriously believe that the human mind is genuinely imperiled by devices of its own manufacture? In Gödel, Escher, Bach (1979), Douglas R. Hofstadter's dazzling book on minds and machines, a man is described--one Johann Martin Zacharias Dase (1824-61)--who was employed by governments because he could do mathematical feats like multiplying two 100-digit numbers in his head, and could calculate at a glance how many sheep were in a field, for example, or how many words in a sentence, up to about 30. (Most people can do this up to about six.) Were Mr. Dase living today, would he be thought a computer? Are computers thought of as men? This is a kind of cultural game people play, a false alarm, a ghost story recited to put one's mind at rest.
The trouble is that "at rest" is a poor place to be in this situation, because such a position encourages no understanding of what these machines can do for life beyond the tricks they perform. Alfred North Whitehead said that "civilization advances by extending the number of important operations which we can perform without thinking about them." In that sense, computers have advanced civilization. But thinking about the computer, as a cultural event or instrument, has so far not advanced civilization one whit. Instead, one hears humanists either fretting about the probability that before the end of the century computers will be able to beat all the world's chess masters, or consoling themselves that a computer cannot be Mozart--the response to the first being, "So what?" and to the second, "Who ever thought so?" The thing to recognize about the computer is not how powerful it is or will become, but that its power is finite. So is that of the mind. The finitudes in both cases are not the same, but the fact that they are comparable may be the most useful news that man's self-evaluation has received in 200 years.
For too long now, generations have been bedeviled with the idea, formally called romanticism, that human knowledge has no limits, that man can become either God or Satan, depending on his inclinations. The rider to this proposition is that some human minds are more limitless than others, and wherever that notion finds its most eager receptacles, one starts out with Byron and winds up in Dachau. To be fair, that is not all of romanticism, but it is the worst of it, and the worst has done the world a good deal of damage. For the 18th century, man was man-size. For the 19th and 20th, his size has been boundless, which has meant that he has had little sense of his own proportion in relation to everything else--resulting either in exaggerated self-pity or in self-exaltation--and practically no stable appreciation of his own worth.
Now, suddenly, comes a machine that says in effect: This is the size of man insofar as that size may be measured by this machine. It is not the whole size of man, but it is a definable percentage. Other machines show you how fast you can move and how much you can lift. This one shows you how well you can think, in certain areas. It will do as much as you would have it do, so it will demonstrate the extent of your capabilities. But since it can only go as far as you wish it to go, it will also demonstrate the strength of your volition.
Both these functions are statements of limitation. A machine that tells you how much you can know likewise implies how much you cannot. To learn what one can know is important, but to learn what one cannot know is essential to one's well-being. This offers a sense of proportion, and so is thoroughly antiromantic. Yet it is not cold 18th century rationalism either. The computer simply provides a way of drawing a line between the knowable and the unknowable, between the moon and the moon in man, and it is on that line where people may be able to see their actual size.
Whether the world will look any better for such self-recognition is anybody's guess. The mind, being an enchanted thing, has surprised itself too often to suggest that any discovery about itself will improve economies or governments, much less human nature. At face value, however, the cultural effects of these machines are promising. Every so often in history man makes what he needs. In one sense he made the computer because he needed to think faster. In another, he may have needed to define himself more clearly; he may have sensed a need for intellectual humility. If he needed intellectual humility at this particular time, it may be a sign that he was about to get out of hand again, and so these contraptions, of which he pretends to be so fearful, may in fact be his self-concocted saving grace. The mind is both crafty and mysterious enough to work that way.
--By Roger Rosenblatt