Computing perspectives: the rise of the VLSI processor
By Maurice V. Wilkes
Communications of the ACM, Vol. 33, No. 12, Pages 16 ff.
Around 1970 Intel discovered it could put 2,000 transistors—or perhaps a few more—on a single NMOS chip. In retrospect, this may be said to mark the beginning of very large-scale integration (VLSI), an event which had been long heralded, but had been seemingly slow to come. At the time, it went almost unnoticed in the computer industry. This was partly because 2,000 transistors fell far short of what was needed to put a processor on a chip, but also because the industry was busy exploiting medium-scale integration (MSI) in the logic family known as TTL. Based on bipolar transistors, and offering a wide range of parts containing a few logical elements—typically two flip-flops or up to 16 gates in various combinations—TTL was highly successful. It was fast and versatile, and established new standards for cost effectiveness and reliability. Indeed, in an improved form and with better process technology, TTL is still widely used. In 1970, NMOS seemed a step backward as far as speed was concerned.

Intel did, however, find a customer for its new process; it was a company that was interested in a pocket calculator chip. Intel was able to show that a programmable device would be preferable on economic grounds to a special-purpose device. The outcome was the chip that was later put on the market as the Intel 4004. Steady progress continued, and led to further developments: in April 1972 came the Intel 8008, which comprised 3,300 transistors, and then in April 1974 came the 8080, which had 4,500 transistors. The 8080 was the basis of the Altair 8800, which some people regard as the ancestor of the modern personal computer. It was offered in the form of a kit in January 1975. Other semiconductor manufacturers then entered the field: Motorola introduced the 6800 and MOS Technology Inc. introduced the 6502.

Microcomputers had great success in the personal computer market, which grew up alongside the older industry but was largely disconnected from it.
Minicomputers were based on TTL and were faster than microcomputers. With instruction sets of their own design and with proprietary software, manufacturers of minicomputers felt secure in their well-established markets. It was not until the mid-1980s that they began to come to terms with the idea that one day they might find themselves basing some of their products on microprocessors taken from the catalogs of semiconductor manufacturers, over whose instruction sets they had no control. They were even less prepared for the idea that personal computers, in an enhanced form known as workstations, would eventually come to challenge the traditional minicomputer. This is what has happened—a minicomputer has become nothing more than a workstation in a larger box, provided with a wider range of peripheral and communication equipment.

As time has passed, the number of CMOS transistors that can be put on a single chip has increased steadily and dramatically. While this has been primarily because improvements in process technology have enabled semiconductor manufacturers to make the transistors smaller, it has also been helped by the fact that chips have tended to become larger. It is a consequence of the laws of physics that scaling the transistors down in size makes them operate faster. As a result, processors have steadily increased in speed. It would not have been possible, however, to take full advantage of faster transistors if the increase in the number that could be put on a chip had not led to a reduction in the total number of chips required. This is because of the importance of signal propagation time and the need to reduce it as the transistors become faster.
It takes much less time to send a signal from one part of a chip to another part than it does to send a signal from one chip to another.

The progress that has been made during the last three or four years is well illustrated by comparing the MIPS R2000 processor, developed in 1986 with two-micron technology, with the Intel i860, developed in 1989. The former is based on a RISC processor which takes up about half the available space. This would not have left enough space for more than a very small amount of cache memory; instead, the designer included the cache control circuits for off-chip instruction and data caches. The remaining space, amounting to about one-third of the whole, was put to good use to accommodate a Memory Management Unit (MMU) with a Translation Lookaside Buffer (TLB) of generous proportions. At this stage in the evolution of processor design, the importance of the RISC philosophy in making the processor as small as it was will be appreciated. A processor of the same power designed along pre-RISC lines would have taken up the entire chip, leaving no space for anything else.

When the Intel i860 processor was developed three years later, it had become possible to accommodate on the chip not only the units mentioned above, but also two caches—one for data and one for instructions—and a highly parallel floating-point coprocessor. This was possible because the silicon area was greater by a factor of slightly more than 2, and the amount of space occupied by a transistor less by a factor of 2.5. This gave a five-fold effective increase in the space available. The space occupied by the basic RISC processor itself is only 10% of the whole, as compared with 50% on the R2000. About 35% is used for the floating-point coprocessor and 20% for the memory management and bus control.
This left about 35% to be used for cache memory. There are about one million transistors on the i860—that is, 10 times as many as on the R2000, not 5 times as many as the above figures would imply. This is because much of the additional space is used for memory, and memory is very dense in transistors. When still more equivalent space on the silicon becomes available, designers who are primarily interested in high-speed operation will probably use the greater part of it for more memory, perhaps even providing two levels of cache on the chip. CMOS microprocessors have now pushed up to what used to be regarded as the top end of the minicomputer range and will no doubt go further as the transistor size is further reduced.

Bipolar transistors have followed CMOS transistors in becoming smaller, although there has been a lag. This is mainly because of the intrinsically more complex nature of the bipolar process; but it is also partly because the great success of CMOS technology has led the semiconductor industry to concentrate its resources on it. Bipolar technology will always suffer from the handicap that it takes twice as many transistors to make a gate as it does in CMOS.

The time to send a signal from one place to another depends on the amount of power available to charge the capacitance of the interconnecting wires. This capacitance is much greater for inter-chip wiring than for on-chip wiring. In the case of CMOS, which is a very low-power technology, it is difficult to provide enough power to drive inter-chip wiring at high speed. The premium placed on putting everything on the same chip is, therefore, very great. Much more power is available with bipolar circuits and the premium is not nearly so great. For this reason it has been possible to build multi-chip processors using gate arrays that take full advantage of the increasingly high speed of available bipolar technology.
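The charging argument above can be made concrete with a rough sketch: the time to swing a wire through a logic level is approximately t = C·V/I, so for a fixed drive current the delay scales with the wire capacitance. The capacitance, voltage, and current values below are assumed, illustrative orders of magnitude—not measurements of any particular process.

```python
# Illustrative sketch: delay to charge a wire's capacitance at constant
# current.  pF * V / mA works out to nanoseconds, so no unit conversion
# is needed.  All numeric values here are assumptions for illustration.
def charge_time_ns(capacitance_pf: float, swing_v: float, drive_ma: float) -> float:
    """Time in ns to charge `capacitance_pf` through `swing_v` volts
    with a constant drive current of `drive_ma` milliamps."""
    return capacitance_pf * swing_v / drive_ma

on_chip_pf = 0.5     # a short on-chip wire (assumed)
off_chip_pf = 30.0   # package pin plus board trace plus receiver (assumed)
swing = 5.0          # a 5 V logic swing, typical of the period (assumed)
cmos_drive_ma = 5.0  # modest drive from a low-power CMOS output (assumed)

print(charge_time_ns(on_chip_pf, swing, cmos_drive_ma))   # 0.5 ns
print(charge_time_ns(off_chip_pf, swing, cmos_drive_ma))  # 30.0 ns
# A bipolar driver can supply far more current, shrinking the off-chip
# penalty -- which is why the single-chip premium is so much greater
# for CMOS than for bipolar.
print(charge_time_ns(off_chip_pf, swing, 50.0))           # 3.0 ns
```

With these assumed figures, going off-chip costs the CMOS driver a factor of 60 in delay, while the higher-current bipolar driver cuts that penalty by an order of magnitude.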
It is presently the case that all very fast computers on the market use multi-chip bipolar processors. Nevertheless, as switching speeds have become higher it has become necessary to develop interconnect systems that are faster than traditional printed circuit boards. It is becoming more and more difficult to do this as switching speeds continue to increase. In consequence, bipolar technology is approaching the point—reached earlier with CMOS—when further advance requires that all those units of a processor that need to communicate at high speed shall be on the same chip. Fortunately, we are in sight of achieving this. It will soon be possible to implement, in custom bipolar technology on a single chip, a processor similar to the R2000.

Such a processor may be expected to show a spectacular increase of speed compared with multi-chip implementations based on similar technology, but using gate arrays. However, as it becomes possible to put even more transistors on a single chip, it may be that the balance of advantage will lie with CMOS. This is because it takes at least four times as many transistors to implement a memory cell in bipolar as it does in CMOS. Since any processor, especially a CMOS processor, gains greatly in performance by having a large amount of on-chip memory, this advantage could well tip the balance in favor of CMOS.

The advantage that would result from being able to put CMOS transistors and bipolar transistors on the same chip has not gone unnoticed in the industry. Active development is proceeding in this area, under the generic name BiCMOS. BiCMOS is also of interest for analogue integrated circuits.

If the BiCMOS process were optimized for bipolar transistors it would be possible to have a very high-performance bipolar processor with CMOS on-chip memory.
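The four-to-one memory-cell factor mentioned above translates directly into cache capacity for a given transistor budget. In the sketch below, the six-transistor CMOS static RAM cell is a standard figure; the bipolar cell count (taken as exactly four times larger) and the transistor budget itself are illustrative assumptions.

```python
# How many bits of on-chip static RAM a fixed transistor budget buys.
CMOS_CELL = 6                 # transistors in a standard 6T CMOS SRAM cell
BIPOLAR_CELL = 4 * CMOS_CELL  # "at least four times as many" (assumed exactly 4x)

budget = 600_000              # hypothetical transistors set aside for memory

print(budget // CMOS_CELL)     # 100000 bits of cache in CMOS
print(budget // BIPOLAR_CELL)  # 25000 bits of cache in bipolar
```

Under these assumptions, the same silicon budget buys a CMOS processor four times the cache of its bipolar counterpart, which is the sense in which on-chip memory could tip the balance back toward CMOS.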
If the bipolar transistors were of lower performance they would still be of value for driving off-chip connections and also for driving long-distance connections on the chip itself.

A pure bipolar chip with a million transistors on it will dissipate at least 50 watts, probably a good deal more. Removing the heat presents problems, but these are far from being insuperable. More severe problems are encountered in supplying the power to the chip and distributing it without a serious voltage drop or unwanted coupling. Design tools to help with these problems are lacking. A BiCMOS chip of similar size will dissipate much less power. On the other hand, BiCMOS will undoubtedly bring a spate of problems of its own, particularly as the noise characteristics of CMOS and bipolar circuits are very different.

CMOS, bipolar, and BiCMOS technologies are all in a fluid state of evolution. It is possible to make projections about what may happen in the short term, but what will happen in the long term can only be a matter of guesswork. Moreover, designing a computer is an exercise in system design, and the overall performance depends on the statistical properties of programs as much as on the performance of the individual components. It would be a bold person who would attempt any firm predictions.

And then, finally, there is the challenge of gallium arsenide. A colleague, with whom I recently corresponded, put it very well when he described gallium arsenide as the Wankel engine of the semiconductor industry!