Opinion
Architecture and Hardware / Last Byte

RISC Management

ACM A.M. Turing Award recipients John Hennessy and David Patterson have introduced generations of students to reduced instruction set computing.
John L. Hennessy and David A. Patterson

At a time when “making an impact” can feel like a vague or even overwhelming prospect, it’s worth reviewing the accomplishments of two scientists who have done just that: ACM A.M. Turing Award recipients John Hennessy and David Patterson. What began as a simple-sounding insight—that you could improve microprocessor performance by including only instructions that are actually used—blossomed into a paradigm shift as the two honed their ideas in the MIPS (Microprocessor without Interlocked Pipeline Stages) and RISC (Reduced Instruction Set Computer) processors, respectively. A subsequent textbook, Computer Architecture: A Quantitative Approach, introduced generations of students not just to that particular architecture, but to critical principles that continue to guide designers as they balance constraints and strive for maximum efficiency.

David, you began working on what became the RISC architecture after a leave of absence at Digital Equipment Corporation (DEC).

DAVID PATTERSON: My sabbatical at DEC focused on reducing microprogramming bugs. At the time, people were designing microprocessors to imitate minicomputers like the ones in DEC’s popular VAX family, but VAX had an extremely complicated instruction set. When I got back to Berkeley, I wrote a paper arguing that if you put that into silicon, you need to make a chip that has a repair mechanism, because of all the bugs.

That paper was rejected.

Figure. Patterson (left) and Hennessy on the campus of Stanford University.

DAVID: It’s a dumb way to design microprocessors if you have to repair microcode bugs. That got me thinking about how to build something simpler that made more sense for microprocessors.

So during a series of graduate courses that began in 1980, you began developing a fast, lean microprocessor that included only instructions that were actually used. How did you meet John?

DAVID: There was a (U.S. Defense Advanced Research Projects Agency) DARPA meeting at the (University of California,) Berkeley campus in May of 1980, and I was presenting some of our early results. And there was this young “graduate student” from Stanford (University)—do you remember what you were presenting then, John?

JOHN HENNESSY: It was probably the microcode compiler that I wrote that got used for (James H.) Clark’s Geometry Engine project.

John, you went back to Stanford and, in 1981, started working on your own Reduced Instruction Set microprocessor, which you eventually commercialized as the Microprocessor without Interlocked Pipeline Stages, or MIPS. However, the ideas behind RISC and MIPS were controversial at first.

JOHN: People said, “This is fine for an academic project, but you’ll never get these ideas to work in real machines that people want to sell to customers.”

Figure. John L. Hennessy

What were the most common objections?

JOHN: We built academic prototypes, and they didn’t have everything you’d need for a commercial machine. And people said, “When you put in virtual memory support, or support for floating point, all the advantages you have will disappear.”

DAVID: The conventional wisdom at the time was software was buggy because the vocabulary of the computers it talked to was too low. So there were all these efforts to try and make the vocabulary closer to that of the languages. John and I were arguing the opposite, and that was part of the heresy.
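To make that divide concrete, here is a rough sketch (ours, for illustration; the instruction sequences are simplified, not exact VAX or MIPS assembler) of how a single line of C maps onto the two vocabularies:

    /* One line of C: */
    a[i] = b[i] + c[i];

    /*
     * CISC-style (VAX-like) vocabulary: roughly one memory-to-memory
     * instruction whose complex addressing modes are decoded in microcode:
     *
     *     ADDL3  b[i], c[i], a[i]
     *
     * RISC-style (MIPS-like) vocabulary: a few simple load/store steps,
     * each easy to pipeline and to verify in silicon:
     *
     *     lw   t0, b_i        load b[i] into a register
     *     lw   t1, c_i        load c[i] into a register
     *     add  t2, t0, t1     t2 = t0 + t1
     *     sw   t2, a_i        store the result to a[i]
     */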

JOHN: I think the other thing that undermined it was that we didn’t have a good scientific explanation of what was happening until later. That was part of our motivation to write the book, so that we could explain the ideas quantitatively and scientifically.

You’re referring to Computer Architecture: A Quantitative Approach, which was first published in 1989 and is now in its sixth edition.

DAVID: I think a lot of faculty are unhappy with the textbooks they use. John and I talked about it at meetings, and when I realized I was going to be the next chair of the computer science division at Berkeley, we decided to go for it, because that gave us a deadline.

You handed off chapters to one another via FedExed floppy disks.

DAVID: We prototyped the book as if it were a computer, with an alpha and a beta version.

Microsoft’s David Cutler bought copies for all members of his design team.

DAVID: I think Microsoft also kept a copy in the stationery store. Pads of paper, pencils, Hennessy and Patterson…

JOHN: In the first year, we probably sold as many books to practicing engineers as we did to the academics. That’s really unusual. Books usually divide one way or the other—either they’re written for professionals or for the university market.

DAVID: The book made those ideas accessible to lots of people.

You also invented a brand-new parameterized architecture for the book, called DLX, that expressed your approach.

DAVID: There are lots of RISC instruction sets, and John is associated with one and I’m associated with another. From a textbook writer’s perspective, we thought that picking one might flavor the book, so we decided to invent a brand-new instruction set.

MIPS lost out to Intel in the PC market, but RISC processors now power nearly all smartphones and mobile devices.

JOHN: The key thing to understand is the efficiency issue. When Dave and I were developing RISC architectures, we had to find efficient ways to use the silicon that was available at the time to get the highest-performance machines. Today, you’re constrained by a whole set of different factors. Sometimes it’s a question of the silicon area, because you’ve got to be able to sell processors for a dollar. But the other big constraint is energy efficiency, because so many things are battery powered, or you can’t include a fan to cool them. So the RISC’s underlying efficiency has enabled it to catapult into this critical role.

DAVID: It’s kind of like “Back to the Future.” When they’re selling things for pennies, people care a lot about the number of transistors they use. At the very high end, the instruction set matters less because there are so many other things going on. At the low end, you even worry about how many registers you have. What’s nice about RISC is that it works fine at the high end, and it’s a big asset at the low end, and that’s why it’s been so successful.


What excites you both in the field of computer architecture?

JOHN: As conventional uniprocessors stall out, there’s a focus now on what Dave and I have called domain-specific architectures—architectures that are designed for specific classes of problems. The obvious example now is deep neural networks; they use very specific computing strategies, and you can get an order of magnitude in efficiency by designing an architecture that does the kinds of functions that these machines need to do well.
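To make that concrete: the computation that dominates deep neural networks is a matrix multiply-accumulate, and a domain-specific design spends its silicon on exactly this loop nest rather than on general-purpose instruction fetch and decode. A minimal C sketch of the kernel such hardware specializes (the function name and dimensions are ours, for illustration):

    #include <stddef.h>

    /* out[i][j] += in[i][k] * w[k][j] -- the multiply-accumulate loop
     * nest at the heart of DNN training and inference. A domain-specific
     * accelerator hardwires this data flow instead of fetching and
     * decoding one general-purpose instruction per arithmetic operation. */
    void matmul_accumulate(size_t n, size_t m, size_t p,
                           const float *in,  /* n x m activations */
                           const float *w,   /* m x p weights     */
                           float *out)       /* n x p outputs     */
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < p; j++)
                for (size_t k = 0; k < m; k++)
                    out[i * p + j] += in[i * m + k] * w[k * p + j];
    }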

Figure. David A. Patterson

DAVID: The reason that John and I are such good co-authors is that we have almost identical world views. Domain-specific architectures are in the newest chapter of our book, and for sure they’re an exciting development.

There are two other things I’m interested in. First is RISC-V, an open instruction set architecture that’s based on RISC principles. Not too many people get to work on proprietary instruction sets like ARM and x86, but everybody can get involved in the instruction set evolution of RISC-V. The other thing is getting better at security. So far, we haven’t asked much of computer hardware in security. I think architects need to step up and really help attack this problem. What’s exciting about RISC-V is that you can download a full viable software stack, prototype your idea using an FPGA, stick it on the Internet, let people attack it, and see whether or not it works. The iteration loop can be days instead of the years it takes with proprietary instruction sets.

John, you’re also involved with the Knight-Hennessy Scholars Program, which aims to “build a multidisciplinary community of Stanford graduate students dedicated to finding creative solutions to the world’s greatest challenges.”

JOHN: We just admitted our first class coming this fall. We have 49 students from 35 different countries. They have already accomplished amazing things, and they represent every school in the university, from law and business to engineering and the social sciences. Given the gigantic leadership problems that we have around the world, hopefully we can get a new generation of young people out there that are determined to do better.

John, you’re also writing a book about leadership.

JOHN: It is at the publisher now. It’s a short book—it’s meant to be readable on a cross-country flight. It’s about my experiences with a variety of leadership issues and what I learned from them. The first chapter is humility.

Anything else?

DAVID: With the ending of Dennard scaling and Moore’s Law, we must change the instruction set for major gains in cost, performance, energy, or security. Freeing architects from the chains of proprietary instruction sets may spur innovation as it did in the 1980s. Hence the title of our Turing Lecture: “A New Golden Age for Computer Architecture: Domain-Specific Hardware/Software Co-Design, Enhanced Security, Open Instruction Sets, and Agile Chip Development.”
