Opinion
Letters to the Editor

Microprocessor Architectures Follow Markets and Silicon


The viewpoint "The Value of Microprocessor Designs" by Ana Aizcorbe et al. (Feb. 2013) aimed to analyze the value of microarchitectures in isolation, as though they could be mixed and matched with various silicon implementation technologies over the years. This is a nonsensical proposition. For example, a Pentium III microarchitecture could not have been realized in 0.8µm technology due to insufficient transistors. A 486 microarchitecture could be instantiated in, say, 90nm technology but would have been too slow to be competitive. And the design trade-offs baked into the 486 pipeline, appropriate for the silicon of the 1980s, would not leverage the much larger transistor budgets of the 1990s and later.

These microarchitectures were not independent of one another, as Aizcorbe et al. implicitly assumed. The Pentium III was the same microarchitecture as the Pentium II but with SSE instructions added. Moreover, both came from the original Pentium Pro P6 microarchitecture. The Pentium M was also a descendant of the P6. The Pentium 4 microarchitecture was substantially different, but, as chief architect of both P6 and Pentium 4, I can testify that the Pentium 4 was not unrelated to P6.

Aizcorbe et al. also said, "The Pentium 4 design provided very little value to Intel" due to "overheating." Incorrect. The Pentium 4 was designed to the limits of air-cooling but not beyond those limits. It did not overheat nor suffer heat-related reliability issues. Had the Pentium 4 been designed with a much different microarchitecture, Aizcorbe et al. suggested Intel might have earned much higher profits. What other microarchitecture? They shed no light on this question. Neither Aizcorbe et al. nor Intel can possibly know what might have happened had the Pentium 4 been designed another way.

They also missed another important real-world issue: fab capacity and opportunity cost. Chip manufacturers can always lower the price to make demand go up, but the bottom line might suffer. Our goal in the Pentium product line was to design chips that would command high prices but also generate high demand. And we did it in the face of a market that evolved dramatically from the 486 days, when only engineers and scientists could afford PCs, through the early 1990s, when home computers were becoming common, through the late 1990s, when the Internet took off.

Aizcorbe et al. did mention such influences but did not take them into account in drawing their conclusions. Each of these market exigencies influenced the design, as well as the final results, in its own way. Microarchitectures are intrinsically bound to their markets and their implementation technologies, and they are not fungible in the way the authors' assumptions require.

Bottom line, Aizcorbe et al. observed that Intel did not earn the same profits on all its products over the years. Some, it turns out, paid off more than others. We can agree on that much. But saying so is not news.

Robert (Bob) Colwell, Portland, OR

Authors’ Response

We welcome Colwell’s comments but disagree on one major point: Our methodology did not mix-and-match different microarchitectures and silicon implementation technologies, as he suggests. We used only those combinations actually produced by Intel at a given point in time. However, our approach does hinge on the assumption that even when a new microarchitecture is available, Intel cannot retrofit all its current capacity to use it. As a result, old and new microarchitectures are used concurrently in production, allowing us to infer the incremental value of the new microarchitecture.

Ana Aizcorbe, Washington, D.C.
Samuel Kortum, New Haven, CT
Unni Pillai, Albany, NY

Computer Science Not for All

I am somewhat disturbed by ACM’s effort to increase the amount of computer science covered in a general education, as laid out in Vinton G. Cerf’s editorial "Computer Science Education–Revisited" (Aug. 2013). Not every student needs to train as a computer scientist. Few will ever enter the field, though all should have the opportunity. However, forcing them to learn some amount of computer science would be a mistake. They already have plenty to cover.

Computer science ultimately comes down to the various areas of mathematics, though many non-computer scientists use mathematics as well. Engineers need good command of calculus. Cryptographers need good command of discrete mathematics. Anyone who balances a checkbook must know addition and subtraction.

I would be more comfortable if ACM advocated using its resources and the Web to share its knowledge, offer online courses, and provide a social media presence so motivated students of computer science could gather and share their experiences.

Perhaps ACM should aim to establish a "Future Computer Scientists" program, coaxed, not forced.

Wynn A. Rostek, Titusville, FL

Author’s Response

Perhaps I should have been clearer. Computer science deserves equal status with mathematics, chemistry, biology, and physics in terms of "science curriculum." For most students who intend to go on to college, some number of science courses is required. To the same degree other science classes are required, computer science should count as one of those that can fulfill such a requirement. Course content should include programming, not simply use of computer-based applications.

Vinton G. Cerf, ACM President

Leverage Credentials to Limit Liability Premiums

In response to Vinton G. Cerf’s editorial "But Officer, I Was Only Programming at 100 Lines Per Hour!" (July 2013), I would like to add that as long as physical damage is caused by machines, vehicles, and medical devices, their manufacturers can be held liable, whether or not a software problem is the root cause. If damage is due to add-on software (such as an app running down the batteries of a navigation device or an app crashing a car by interfering with car-to-car communication), the software manufacturer could be liable. When liability insurance premiums can be lowered by demonstrating professional qualifications, the option of acquiring personal or staff certification will be embraced by software development professionals, though, perhaps, not enthusiastically.

Ulf Kurella, Regensburg, Germany

I view myself as a "formal methodist" in the professional world of software engineering, though reality is less grand. I "manage" the application of formal methods and also (try to) sell the formal-methods concept to upper management in my organization, in part by quoting Vinton G. Cerf’s fellow Turing-Award laureates E.W. Dijkstra and C.A.R. Hoare, in addition to A.M. Turing himself. (It is, of course, no reflection on these immortals that the success of my pitch is modest at best.) I also quote Cerf, context-free (July 2013): "[N]o one really knows how to write absolutely bug-free code…" and repeat the same disclaimer regarding my own (lack of) impact.


Rumor has it IEEE will soon adopt a professional-engineer license in software engineering. I hold its purported precursor, the IEEE Certified Software Development Professional (http://www.computer.org/portal/web/certification/csdp), but do not recall whether I answered the single formal-methods question, out of 180 questions, correctly in 2009 when I took and apparently passed the test.

The back-and-forth between ACM and IEEE regarding licensure and certification is interesting and probably necessary. I thank Cerf for eschewing any shrillness while providing much-appreciated humor.

George Hacken, Wayne, NJ

Work Still Worth Doing

My essay "The World Without Work," included in the 1997 book Beyond Calculation, curated by Peter J. Denning for ACM’s golden anniversary, weighed many of the issues Martin Ford addressed more recently in his Viewpoint "Could Artificial Intelligence Create an Unemployment Crisis?" (July 2013) where he wrote, "Many extremely difficult issues could arise, including finding ways for people to occupy their time and remain productive in a world where work was becoming less available and less essential." I would go further; with the likely advances in artificial intelligence and robotics, work will eventually not be essential at all, at least not for physical sustenance. Our stubbornly high levels of unemployment, likely to rise dramatically in the future, already reflect this prospect.

Certain classes of people (such as the elderly, the young, and the disabled) are generally already not expected to work. Why would anyone be required to work at all? Why not just have the government provide some basic level of support for the asking, with work something people do only to go beyond that basic level or for their own satisfaction? We are not far from this being feasible, solving the unemployment problem in a single stroke. The cost, though enormous, would be covered by harvesting the additional wealth created by society’s overall increased productivity and lowered costs, the so-called Wal-Mart effect, taken to the limit. Though tax rates would have to increase, after-tax purchasing power would not have to decrease. The remaining work would be more highly compensated, as it should be.

What work is already not being done? Consider the prompt filling of potholes in our streets. If we can have self-driving cars, we can certainly have automated pothole-fillers.

Ford also said, "If, someday, machines can match or even exceed the ability of a human being to conceive new ideas… then it becomes somewhat difficult to imagine just what jobs might be left for even the most capable human workers." But some jobs will never be automated. Consider ballet dancers and preachers. With ballet dancers, the audience is captivated by how gracefully they overcome the physical constraints of their bodies. An automated ballet dancer, no matter how graceful, would be nowhere near as interesting as a human one. Preachers inspire us by establishing emotional rapport; no robot, no matter how eloquent, will ever do that.

Paul W. Abrahams, Deerfield, MA

Corrections

The second author (following L. Irani) of the article "Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk" (http://www.ics.uci.edu/~lirani/Irani-Silberman-Turkopticon-camready.pdf) cited in the "Further Reading" section of Paul Hyman’s news story "Software Aims to Ensure Fairness in Crowdsourcing Projects" (Aug. 2013) should have been M. Silberman, not M. Silverman. Also, though not included in Hyman’s article, it should be noted Silberman (of the University of California, Irvine) was a co-designer and primary builder (along with Irani) of the Turkopticon.

In Madhav Marathe et al.’s review article "Computational Epidemiology" (July 2013), the labels I(2) and I(3) in Figure 2 (page 91) were inadvertently switched; the epicurve should have been (1,1,1,2).
