
Is Multicore Hardware For General-Purpose Parallel Processing Broken?

By Uzi Vishkin

Communications of the ACM, Vol. 57 No. 4, Pages 35-39
DOI: 10.1145/2580945



Over the past decade, most opportunities for performance growth in mainstream general-purpose computers have been tied to their exploitation of the increasing number of processor cores. Overall, there is no question that parallel computing has made big strides and is being used on an unprecedented scale within companies like Google and Facebook, for supercomputing applications, and in the form of GPUs. However, this Viewpoint is not about these wonderful accomplishments. A quest for future progress must begin with a critical look at some of the current shortcomings of parallel computing, and that look is the aim of this Viewpoint. It will hopefully stimulate a constructive discussion on how best to remedy these shortcomings, toward what could ideally become a golden age of parallel computing.

Current-day parallel architectures allow good speedups on regular programs, such as dense-matrix type programs. However, these architectures are mostly handicapped on other programs, often called "irregular," or when seeking "strong scaling." Strong scaling is the ability to translate an increase in the number of cores into faster runtime for problems of fixed input size. Good speedups over the fastest serial algorithm are often feasible only when an algorithm for the problem at hand can be mapped to a highly parallel, rigidly structured program. But even for regular parallel programming, cost-effective programming remains an issue. The programmer's effort for achieving basic speedups is much higher than for basic serial programming, with some limited exceptions for domain-specific languages, where this load is somewhat reduced. It is also worth noting that during the last decade innovation in high-end general-purpose desktop applications has been minimal, perhaps with the exception of computer graphics; this is especially conspicuous in comparison to mobile and Internet applications. Moreover, contrast the 2005 predictions by vendors such as Intel2 that mainstream processors would have several hundred cores ("paradigm shift to a many-core era") by 2013 with the reality of around a handful; perhaps the diminished competition among general-purpose desktop vendors in that timeframe did not give them the motivation to take the risks that such a paradigm shift entails. But does this mean the hardware of these computers is broken?
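To make the regular/irregular distinction concrete, the following is a minimal sketch (not taken from the Viewpoint; sizes, names, and the use of OpenMP are my own assumptions). The first loop is "regular": a dense matrix-vector product whose iterations touch contiguous memory and do identical work, so it typically scales well across cores. The second loop is "irregular": it sums over adjacency lists of data-dependent, widely varying length, producing scattered memory accesses and load imbalance of the kind on which current architectures tend to lose their speedup.

/* Illustrative only: a regular dense loop versus an irregular,
 * data-dependent loop. Compile with an OpenMP-capable compiler,
 * e.g., gcc -fopenmp example.c -o example */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1024

int main(void) {
    /* Regular: dense matrix-vector product. Every iteration does the same
       amount of work on contiguous data, so cores stay equally busy. */
    static double A[N][N], x[N], y[N];
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        for (int j = 0; j < N; j++) A[i][j] = 1.0;
    }
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++) sum += A[i][j] * x[j];
        y[i] = sum;
    }

    /* Irregular: sum over adjacency lists of varying length (a stand-in
       for graph or sparse-matrix code). Memory accesses are data-dependent
       and per-iteration work is unbalanced. */
    int *deg = malloc(N * sizeof *deg);
    int **adj = malloc(N * sizeof *adj);
    for (int v = 0; v < N; v++) {
        deg[v] = 1 + rand() % N;                 /* skewed, data-dependent degree */
        adj[v] = malloc(deg[v] * sizeof **adj);
        for (int k = 0; k < deg[v]; k++) adj[v][k] = rand() % N;
    }
    double total = 0.0;
    #pragma omp parallel for reduction(+:total) schedule(dynamic)
    for (int v = 0; v < N; v++) {
        double s = 0.0;
        for (int k = 0; k < deg[v]; k++) s += y[adj[v][k]];   /* scattered reads */
        total += s;
    }

    printf("y[0] = %f, irregular total = %f\n", y[0], total);
    for (int v = 0; v < N; v++) free(adj[v]);
    free(adj); free(deg);
    return 0;
}

Note that schedule(dynamic) partially mitigates the load imbalance in the irregular loop, but the scattered, cache-unfriendly reads remain, which is one reason strong scaling on such code is hard on current hardware.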
