Monday, Jun. 09, 1986

Letting 1,000 Flowers Bloom

By Philip Elmer-DeWitt

With its black-tinted panels and pulsing red indicator lights, it bears a striking resemblance to Joshua, the fictional computer that plays chess and thermonuclear war in the movie WarGames. But this is the real thing. Inside a 5-ft. Lexan plastic cube is a powerful new computer called the Connection Machine, which not only looks different from most mainframes but is different.

In its recent public debut, the first production model made short work of a series of knotty problems that would have tied up an ordinary computer for hours. It scanned three months of Reuters news stories--16,000 articles in all--in a fraction of a second. In two seconds, it transformed a stereoscopic image transmitted by a pair of television cameras into a detailed, two-dimensional contour map. In three minutes, it laid out the circuitry for a computer chip containing 4,000 transistors. Says Daniel Hillis, the computer's 29-year-old designer and co-founder of Thinking Machines Corp. of Cambridge, Mass.: "The conventional computer is to the Connection Machine what the bicycle is to a supersonic jet."

Hillis may be using hyperbole. But if the initial kinks can be worked out, his strange new machine will be capable of operating at speeds in excess of 1 billion instructions a second--roughly the power of a Cray X-MP supercomputer but at a quarter the cost. Moreover, the Connection Machine offers the hope of solving problems in machine vision and artificial intelligence for which today's supercomputers are woefully ill equipped. Says Stephen Squires, a spokesman for the Defense Advanced Research Projects Agency (DARPA), the Defense Department bureau that put up $4.7 million for the computer's development: "It has surpassed our expectations."

How does the Connection Machine achieve its remarkable speed? The secret is in its extraordinary number of processors and a radical new architecture that gives it the flexibility to perform large numbers of calculations simultaneously.

Most computers built in the past 40 years were designed to do one thing at a time. Following the basic concept conceived by John von Neumann and his colleagues in 1945, they consist of a single, high-speed central processing unit connected to an array of memory cells. "The two-part architecture keeps the silicon devoted to processing wonderfully busy," says Hillis. "But this is only 2% or 3% of the silicon area. The other 97% (the memory bank) sits idle."

As computers have grown more capacious, this design has become increasingly inefficient. While it is easy to expand memory, it is hard to increase the capacity of the processor. As a result, giant machines are forced to draw their data through a single narrow passageway known by computer scientists as the von Neumann bottleneck.

One way to widen that bottleneck is to add more processors. Over the past five years, dozens of computer designers have taken this approach, adding from several to a few hundred processing units and letting them share, in parallel, the task of handling data.

The Connection Machine tries to do away with the bottleneck by overwhelming it with processors, 65,536 of them. Acting in concert, they can handle massive amounts of data. Equally important, each processor is assigned its own tiny memory bank. This means that processing and memory, once separated by a narrow channel, are now integrated within a fingernail-size piece of silicon. Moreover, each processor is directly or indirectly connected to every other one through what is in effect a miniature telephone system with 4,096 switching stations and 24,576 trunk lines that can be programmed and reprogrammed without actually changing the computer's wiring.
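The article's figures are consistent with a hypercube layout: 4,096 switching stations is 2 to the 12th power, and if each station links to the 12 others whose binary address differs in a single bit, the network has exactly 4,096 x 12 / 2 = 24,576 trunk lines. The sketch below works out that arithmetic; the 12-bit addresses and bit-flip routing rule are assumptions for illustration, not a description of the machine's actual wiring.

```python
# Sketch of a hypercube router network consistent with the article's
# figures (4,096 switching stations, 24,576 trunk lines, 65,536
# processors). The 12-bit addresses and one-bit-per-hop routing rule
# are illustrative assumptions, not the CM-1's documented design.

DIMENSIONS = 12
ROUTERS = 2 ** DIMENSIONS                       # 4,096 switching stations
PROCESSORS_PER_ROUTER = 16
PROCESSORS = ROUTERS * PROCESSORS_PER_ROUTER    # 65,536 processors

# Each router has 12 links; each link is shared by two routers.
TRUNK_LINES = ROUTERS * DIMENSIONS // 2         # 24,576 trunk lines

def neighbors(addr: int) -> list[int]:
    """Routers whose 12-bit address differs from addr in exactly one bit."""
    return [addr ^ (1 << bit) for bit in range(DIMENSIONS)]

def route(src: int, dst: int) -> list[int]:
    """A message reaches any router in at most 12 hops by fixing
    one differing address bit per hop."""
    path, here = [src], src
    while here != dst:
        bit = (here ^ dst).bit_length() - 1     # highest differing bit
        here ^= 1 << bit
        path.append(here)
    return path
```

Because every address bit can be corrected independently, no two routers are ever more than 12 hops apart, which is what lets the connections be "reprogrammed" in software rather than rewired.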

These reprogrammable connections give the machine its name. For any particular task, the processors are electronically rearranged to suit the natural structure of the data. To simulate a computer component made up of 20,000 transistorized switches, for example, the machine would assign one processor to each switch. Then, rather than updating the state of those 20,000 switches one at a time, as in a traditional von Neumann-type computer, the Connection Machine's software simply tells the 20,000 processors to update themselves all at once.
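The one-processor-per-switch style described above can be sketched with array operations, where a single vectorized statement plays the role of one instruction broadcast to every processor at once. The circuit simulated here (a ring of 20,000 inverters) is an invented example for illustration, not one from the article.

```python
import numpy as np

# One array element per simulated switch. Each vectorized statement
# stands in for a single instruction broadcast to all processors; the
# ring-of-inverters circuit is a made-up example.

N = 20_000
state = np.zeros(N, dtype=np.uint8)   # all switches off...
state[0] = 1                          # ...except the first

def step(state):
    # Every switch reads its left neighbor and inverts it. All 20,000
    # updates happen "simultaneously" in one array operation, instead
    # of a 20,000-iteration loop over individual switches.
    return 1 - np.roll(state, 1)

state = step(state)
```

The key point is that the program contains no loop over switches: the parallelism lives in the data layout, so the same two-line update rule works unchanged whether the circuit has 20 switches or 20,000.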

Unfortunately, programming such a machine calls for some conceptual gymnastics that even computer scientists find difficult to perform. According to a DARPA report, only one in three Defense Department programmers can make the leap. Says Larry Smarr, director of the National Center for Supercomputer Applications at the University of Illinois: "We have 40 years' experience designing software for single-processor machines. But the software for these new machines is complicated and excruciatingly hard to write."

Still, the Connection Machine has some powerful supporters, among them M.I.T.'s Marvin Minsky, a pioneer in artificial-intelligence research, and Claude Shannon, the father of the statistical theory of information. By last week Hillis' company had taken orders for seven of its new computers, ranging in price from $1 million to $3 million: two each from M.I.T., Perkin-Elmer and DARPA, and one from Yale University.

Thinking Machines Corp. is not the only game in town. BBN, Intel and Floating Point Systems have shipped parallel-processing computers that are nearly as ambitious, and a pair of start-up companies, Encore and Sequent, are finding a ready market for more modest parallel machines. Meanwhile, research teams across the U.S. are experimenting with even more radical designs. Among them: AT&T Bell Laboratories' computer circuits that mimic the action of the billions of neurons in the human brain. "It's a time for experimentation," says Illinois' Smarr. "There are 1,000 flowers blooming."

With reporting by J. Madeleine Nash/Chicago and William Sonzski/Cambridge