Last Byte

Shaping the Foundations of Programming Languages

ACM A.M. Turing Award recipients Alfred Aho and Jeffrey Ullman discuss their early work, the 'Dragon Book,' and the future of 'live' computer science education.
Figure. Jeffrey Ullman and Alfred Aho conversing in a Nokia Bell Labs hallway.

ACM A.M. Turing Award recipients Alfred Aho and Jeffrey Ullman met serendipitously in the registration line for Princeton University’s Ph.D. program. After graduate school, both joined the newly established Computing Science Research Center at Bell Laboratories, and their friendship turned into a productive collaboration that shaped the foundations of programming language theory and implementation. Here, they talk to us about languages, compilers, and the future of CS education.

You joined Bell Labs in 1967, during what many consider its heyday. What was it like?

AHO: Bell Labs was a researcher’s nirvana. When my boss hired me, he said, “Why don’t you work on what you think is important?” And we were surrounded by all these brilliant colleagues—people like Ken Thompson and Dennis Ritchie. People had the tradition of keeping their office doors open and welcoming strangers with questions rather than shooing them out.

ULLMAN: My understanding is that, by law, AT&T had to put 1% of its revenue into research. I don’t think there’s any precedent for anything like that.

AHO: It was definitely the heyday of long-term unfettered research. And the results demonstrate the viability of that model. In my first 10 years at Bell Labs, I was able to write 40 papers and five books, almost all of them co-authored, because the environment was so rich with problems.

Figure. Elements of the 1960s Unix Room have been preserved at Nokia Bell Labs. Ullman and Aho sit on the couch from Dennis Ritchie’s office, where many of the discussions that led to the Unix operating system took place. Ritchie, with Ken Thompson, received the 1983 ACM A.M. Turing Award.

What about your collaborations with each other? Jeff, you’ve called your result on two-way deterministic finite automata the work you’re most proud of from that period.

ULLMAN: We have to understand it’s of no consequence whatsoever. A finite automaton—something like a lexical analyzer—is ordinarily designed to chew up its input in one direction; it reads stuff and never goes back. But there was this question of what happens if you allow it to move back and forth on its tape, and we were able to show that that doesn’t add to the power. It’s a very hard proof, or at least, it was at the time. But I enjoyed the work, and I think Al did, too.

AHO: Jeff would walk into my office and say, “I’ve got the proof now.” And I would listen to it and say, “What about this step?” An hour or two later, I would pop back into his office to say, “We could get around that this way.” And he’d expose a flaw in my reasoning. We kept this up for a week. And at the end of the week, we didn’t see a flaw in one another’s arguments. So we proved what was, as Jeff mentioned, an inconsequential theorem, but on the other hand, it was a fun proof and it fit into the kind of language theory that was being done at the time.
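For readers who want a concrete picture, here is a minimal sketch (our illustration, not taken from the interview) of the kind of one-way machine Ullman describes: a deterministic finite automaton that consumes its input strictly left to right, the way a lexical analyzer does. The theorem says that also allowing the head to move back and forth over the tape lets such a machine recognize no additional languages.

```python
# A minimal, hypothetical one-way DFA: it reads its input left to right
# and never moves back. This particular machine accepts binary strings
# that contain the substring "01".

def run_dfa(s: str) -> bool:
    # States: 0 = start, 1 = just saw '0', 2 = have seen "01" (accepting)
    transitions = {
        (0, "0"): 1, (0, "1"): 0,
        (1, "0"): 1, (1, "1"): 2,
        (2, "0"): 2, (2, "1"): 2,
    }
    state = 0
    for ch in s:              # one pass, strictly left to right
        state = transitions[(state, ch)]
    return state == 2         # accept iff the accepting state was reached

assert run_dfa("1001") and not run_dfa("110")
```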

Eventually, you drew on your work in language theory to make groundbreaking contributions to the field of compiler design.

AHO: At the beginning, we launched a systematic study of the theory of parsing, translation, and compiling—what models there were, what algorithms, and what needed to be extended. This resulted in a two-volume book that we wrote in the early 1970s, The Theory of Parsing, Translation, and Compiling. Then, as we learned more about compilers and compiling, and as we worked with some of our colleagues to develop tools for creating lexical analyzers, like Lex, and parsers, like yacc, we codified what we knew into another book, called Principles of Compiler Design. And Jeff had this brilliant idea of putting a dragon on the front cover with the complexity of compiler design tattooed on its chest.

This is the text that eventually became known as the Dragon Book. I was going to ask whose idea that was.

AHO: The book became widely adopted, and the field grew rapidly. We had to write subsequent books to try to keep up. The color of the dragon changed, and we added a coauthor, Ravi Sethi, and then another, Monica Lam. But from 1977 to 2007, we wrote this collection of books and did research that covered the sweep of programming language translators, and in the process created tools that our colleagues could use to create domain-specific languages and harness theory without having to become theoreticians. Don Knuth once said that the best theory is motivated by practice and the best practice by theory. That fits our work quite nicely.
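To make the division of labor in that toolchain concrete: Lex compiles a table of regular-expression rules into a lexical analyzer, and yacc compiles a grammar into a parser. The sketch below is our own rough Python imitation of the Lex half (token patterns combined into a single scanner); it does not reproduce Lex’s actual specification format, and the token names are our own illustration.

```python
# A rough sketch in the spirit of a Lex-generated scanner: a table of
# token patterns is compiled into one master regular expression.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text: str):
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":     # drop whitespace, as Lex rules often do
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 42 + y")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```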

Jeff, you’ve also thought a lot about the relationship between theory and practice.

ULLMAN: Different fields have different ways of looking at it. Parsing yields amazingly well to theory. At the time of the first Fortran compiler, it took several person-years to write a parser. By the time yacc came around, you could do it in an afternoon. So the compilers and programming languages community has always had a positive view of theory. In the database field, where I spent my subsequent career, it was initially quite a struggle to get researchers to accept the idea that theory has some impact. That’s since changed.
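To see why a parser became an afternoon’s work, consider how little logic a small grammar needs once lexing is handled separately. The sketch below is our own hand-written illustration for the toy grammar expr -> NUMBER (('+'|'-') NUMBER)*; yacc itself goes further, generating table-driven LALR(1) parsers directly from such grammar rules rather than code like this.

```python
# A hand-written parser-evaluator (our illustration, not yacc output) for
# the toy grammar: expr -> NUMBER (('+'|'-') NUMBER)*
# It consumes (kind, text) token pairs like those from the scanner sketch above.

def parse(tokens):
    pos = 0

    def expect_number():
        nonlocal pos
        kind, text = tokens[pos]
        assert kind == "NUMBER", f"expected a number, got {text!r}"
        pos += 1
        return int(text)

    result = expect_number()
    while pos < len(tokens) and tokens[pos][1] in "+-":
        op = tokens[pos][1]
        pos += 1
        rhs = expect_number()
        result = result + rhs if op == "+" else result - rhs
    return result

print(parse([("NUMBER", "4"), ("OP", "+"), ("NUMBER", "2"),
             ("OP", "-"), ("NUMBER", "1")]))   # prints 5
```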

What about the relationship between industry and academia?

AHO: I was always interested in teaching. There was a Turing laureate at Bell Labs by the name of Richard Hamming, and he had an opinion on everything. In my first week, he told me that if you want to be a great scientist, you have to do more than great work; you have to teach others how to use your work. I found the combination of doing research, writing books, and teaching very fruitful.

ULLMAN: My original plan was to become an academic, but in those days, assistant professors were the scum of the Earth—all of the hard work and none of the power or opportunities. It’s very different today. Back then, I figured I could go work in industrial research for a while first, and it certainly paid off.

Speaking of academia, Jeff, I wonder what you make of the future of traditional “live” education. You’ve been thinking about it since at least 2001, when you proposed creating a company that would act as an application service provider (ASP) for computer science education, providing centralized lectures supported by local in-person instruction and a 24/7 helpline.

ULLMAN: I should have been more aggressive about building a company like Coursera around that idea. But the thing that I got hung up on, and that has never really come to fruition, is this idea of a universal TA system. If you’re having problems with your PC, you can call tech support. Sometimes it’s useful help, sometimes not, but there’s a system for fixing problems. I never understood why it wasn’t feasible to do the same thing for assistance with homework. But it’s hard to have a global system where students from 1,000 different courses, even on the same subject, can ask about their homework, unless instructors are aligned in what questions they ask.

Your online course on automata got great reviews.

ULLMAN: The online courses are helpful. People are still using them. Maybe one day, these course videos will replace textbooks, because there’s no question that the demand for texts and courses requiring texts is just gone.

AHO: On the subject of education, I feel that what you teach is perhaps more important than how you teach it. I’m a big fan of finding the appropriate abstractions for talking about a problem area and then finding solutions in terms of those abstractions. It’s wonderful to be in an area like computer science because, as we expand its reach, we encounter problems for which we don’t even have the appropriate abstractions to be able to think about them.

When you imagine the future of the field, what areas do you think hold the most promise?

AHO: That is a great question. Particularly with fields like AI, we’re starting to replace people who do routine cognitive jobs with computer programs. What will the job market of the future be with this increasing capacity and power of computing? And as a consequence, what should we be educating students in so they’ll be employable in the future? One of the feelings that I have is that the basics of the field don’t go out of style nearly as fast as the details of the technology.

ULLMAN: As technology improves, we are able to deal with larger and larger datasets. So far, that’s opened up more and more opportunities for solving problems. There are problems, like driverless cars, that yield to an abundance of data but don’t yield to small amounts of it. Throughout my life, we’ve never hit a limit, so when we can deal with 10 times more data than is currently possible, there will probably be other things we can do that I can’t even imagine.
