The August 2014 issue of IEEE Spectrum had two articles of interest related to computing: "Silicon's Second Act" and "Spin Memory Shows Its Might."
On top of that, in the last couple of years, IBM has demonstrated two remarkable achievements: the Watson artificial intelligence system and the TrueNorth neural chip featured in the August 8, 2014 cover story of Science entitled "Brain Inspired Chip." The TrueNorth chipset and the programming language it uses have demonstrated remarkable power efficiency compared to more conventional processing elements.
What all of these topics have in common for me is the prospect of increasingly unconventional computing methods that may naturally force us to rethink how we analyze problems for purposes of getting computers to solve them for us. I consider this to be a refreshing development, challenging the academic, research, and practitioner communities to abandon or adapt past practices and to consider new ones that can take advantage of new technologies and techniques.
It has always been my experience that half the battle in problem solving is to express the problem in such a way that the solution may suggest itself. In mathematics, it is often the case that a change of variables can dramatically restructure the way in which a problem or formula is presented, leading one to find related problems whose solutions may be more readily applied. Changing from Cartesian to polar coordinates, for example, often dramatically simplifies an expression: a Cartesian equation for a circle of radius a centered at (0,0) is x² + y² = a², but the polar version is simply r(φ) = a.
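The point can be checked mechanically. The short sketch below (an illustration, not drawn from the column itself) converts between the two coordinate systems and confirms that points satisfying the Cartesian equation x² + y² = a² all reduce, in polar form, to the single constraint r = a:

```python
import math

def to_polar(x, y):
    """Convert Cartesian (x, y) to polar (r, phi)."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, phi):
    """Convert polar (r, phi) back to Cartesian (x, y)."""
    return r * math.cos(phi), r * math.sin(phi)

a = 2.0  # circle radius
# Sample several points on the circle. In Cartesian form each point
# must satisfy x^2 + y^2 = a^2; in polar form every point reduces to
# the single equation r = a, regardless of the angle phi.
for phi in (0.0, math.pi / 4, math.pi / 2, math.pi):
    x, y = to_cartesian(a, phi)
    r, _ = to_polar(x, y)
    assert math.isclose(x * x + y * y, a * a)  # Cartesian form holds
    assert math.isclose(r, a)                  # polar form: r(phi) = a
```

The asymmetry is the point of the example: the Cartesian form couples two variables, while the polar form eliminates the angle entirely.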
It may prove to be the case that the computational methods for solving problems with quantum computers, neural chips, and Watson-like systems will admit very different strategies and tactics than those applied in more conventional architectures. The use of graphics processing units (GPUs) to solve problems, rather than generating textured triangles at high speed, has already forced programmers to think differently about the way in which they express and compute their results. The parallelism of the GPUs and their ability to process many small "programs" at once has made them attractive for evolutionary or genetic programming, for example.
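To make the GPU-friendly style concrete, here is a minimal genetic-algorithm sketch in which the entire population is evaluated and mutated in bulk rather than one candidate at a time. It uses NumPy's array parallelism as a CPU stand-in for GPU data parallelism; the fitness function, population size, and mutation scale are all illustrative choices, not anything prescribed by the column:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(pop):
    # Evaluate every candidate in one array operation -- the
    # data-parallel pattern that maps naturally onto GPU hardware.
    # Toy objective: maximize -(x - 3)^2, whose optimum is x = 3.
    return -(pop - 3.0) ** 2

pop = rng.uniform(-10.0, 10.0, size=256)
for _ in range(100):
    f = fitness(pop)
    # Truncation selection: keep the better half of the population.
    parents = pop[np.argsort(f)[-128:]]
    # Offspring are parents plus Gaussian mutation, again in bulk.
    children = parents + rng.normal(0.0, 0.1, size=parents.shape)
    pop = np.concatenate([parents, children])

best = pop[np.argmax(fitness(pop))]
print(best)  # converges near the optimum at 3.0
```

The inner loop contains no per-individual code at all; every step is a whole-population array operation, which is exactly the shape of computation that makes GPUs attractive for evolutionary methods.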
One question is: Where will these new technologies take us? We have had experiences in the past with unusual designs. The Connection Machine designed by Danny Hillis was one of the first really large-scale parallel computing machines, with 65,536 one-bit processors connected in a hypercube topology. LISP was one of the programming languages used for the Connection Machines, along with C*, among others. This brings to mind the earlier LISP machines made by Symbolics and LISP Machines, Inc., among others. The rapid advance in speed of more conventional processors largely overtook the advantage of special-purpose, potentially language-oriented computers. This was particularly evident with the rise of the so-called RISC (Reduced Instruction Set Computing) machines developed by John Hennessy (the MIPS system) and David Patterson (Berkeley RISC and the Sun Microsystems SPARC), among many others.
David E. Shaw, at Columbia University, pioneered one of the explorations into a series of designs for a single instruction stream, multiple data stream (SIMD) supercomputer he called Non-Von (for "non-von Neumann"). Built from single-bit arithmetic logic units, the design bears some resemblance to the Connection Machine, although the two machines' interconnection designs were quite different. It has not escaped my attention that David Shaw is now the chief scientist of D.E. Shaw Research, where he is focused on computational biochemistry and bioinformatics. This topic also occupies his time at Columbia University, where he holds a senior research fellowship and an adjunct professorship.
Returning to new computing and memory technologies, one has the impression that the limitations of the conventional use of silicon may be overcome with new materials and new architectural designs, as is beginning to be apparent with the new IBM neural chip.
I have only taken time to offer a very incomplete and sketchy set of observations about unconventional computing in this column, but I think it is arguable that in this second decade of the 21st century we are starting to see serious opportunities for rethinking how we may compute.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.