Letters to the Editor

Toward a Map Interface Not Inherently Related to Geography

Hanan Samet et al.’s article "Reading News with Maps by Exploiting Spatial Synonyms" (Oct. 2014) on maps as the basis for reading news was an impressive and informative description of how a map-query interface should work. The details were enlightening, with NewsStand described as "a general framework for enabling people to search for information with a map-query interface." I am not sure whether the authors intend to broaden their scope to include even information that is not inherently geographic, but I hope they do.

NewsStand is, I think, part of the research behind spatial interfaces in general. I was similarly impressed when first learning about the spatial data management implementations at MIT in the late 1970s led by Richard Bolt.1 At the time, there was interest in exploiting spatial relationships as the main organizing interface. In the early 1980s, I became convinced the improving graphics of personal computers would allow spatial interfaces to become a useful and more mainstream way to access the full range of computer-stored information. I wrote two conference papers in 1984 and 1985 describing how graphical depictions of spatial orientations could be useful in finding information, including word maps for concepts. Several years later, I included screenshots of a geographic map interface using an early Macintosh to show how it might look when searching something as simple as a telephone directory.2 I went on to suggest that, in the same way maps of real geography provide a natural interface to information involving geographical relationships, "information maps" could likewise be a useful and popular interface to content not inherently related to geography.

However, in spite of this potential, such an interface has not emerged; indeed, it seems there is a strong movement against relying on spatial orientation. The best example might be today’s fascination with putting everything in "the cloud"; it does not matter where it is, just that it is out there, somewhere. The popular Google search interface is another example, with no spatial arrangement other than a list, except for Google Maps and Google Earth, which are specifically designed to find geographic locations.

As impressive as NewsStand is, for map-query interfaces to go beyond geography-based systems, I assume the information maps would have to offer the same familiarity as traditional geographic maps, along with their connection to real routes toward a goal. I am perhaps misguided in thinking some sort of map interface could eventually be the dominant window into the enormous mass of digital bits being stored out of view. At the least, NewsStand is perhaps a helpful step in that direction.

Richard H. Veith, Port Murray, NJ

Realistic Parallel Programming Productivity

As a user and teacher of parallel programming, I question whether "[t]he time to … the first successful parallel run" on two cores is indeed a useful measure of the productivity of a language, the metric used by John T. Richards et al. in the study they reported in their article "A Decade of Progress in Parallel Programming Productivity" (Nov. 2014).

First, any notion of productivity must take into consideration the time it takes to ensure a program is correct. A successful run on two cores is hardly a guarantee of correctness. A better criterion would require passing a battery of tests on a range of inputs and processor counts, perhaps supplemented by formal verification. In my own experience using MPI and various threading models, I often find I can quickly "get something running" using threads, only to spend days debugging subtle race conditions. In contrast, using MPI may take longer at first, but subsequent debugging and verification is more straightforward.
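To illustrate the point (a minimal hypothetical sketch in C with POSIX threads, not code from the study), the following program typically "runs successfully" on two cores yet harbors a classic data race: the unsynchronized increments can be lost, so the final count varies from run to run, and the bug can easily survive a first successful parallel run.

    /* Hypothetical example: two threads increment a shared counter with no
       synchronization. It compiles and appears to work on two cores, but the
       read-modify-write of counter is a data race. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                  /* shared, unprotected state */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                        /* racy increment */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* expected 2000000; often less */
        return 0;
    }

Guarding the increment with a mutex or an atomic fixes it, but finding such races in a real program is exactly the debugging effort a "first successful two-core run" metric does not capture.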

The second issue with the metric is that it does not address performance. Richards et al. said they did not measure performance. Why not? It would have been easy to do and would have provided insight into the trade-offs inherent in the choice of language. In parallel programming, a significant portion of the development effort goes to ensuring the program exhibits adequate runtime performance. Studies of productivity could take this time into account by, say, requiring the program to obtain some specified speedup on some specified (large) number of processors before it is deemed complete. (Richards et al. did mention X10 programs from another study that performed well, but presumably development of those programs did not end with their first successful two-core run.)
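One way to make such a completion criterion concrete (a hypothetical threshold of my own, not one taken from the article) is to state it in terms of the usual speedup ratio:

    speedup(p) = T(1) / T(p)

where T(p) is the wall-clock time of the program on p processors. A study could then, for example, require speedup(64) >= 32 on a representative input before declaring development complete.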

Without doubt, there is a need for parallel programming languages that are more productive, as well as more studies measuring productivity. But, to be persuasive, the studies must use metrics that address all phases of a realistic development process.

Stephen F. Siegel, Newark, DE

Authors’ Response:

There is certainly more to developing a parallel application than getting it to run on two cores against a substantial enough set of data to reasonably ensure correctness. Our study found substantial productivity gains across six application types during this early coding, debugging, and testing, and we suspect, but cannot prove, the gains will hold during later scaling, tuning, and verification. As the X10 community grows, we hope to see additional aspects of productivity and performance investigated.

John T. Richards, Jonathan Brezin, Calvin B. Swart, and Christine A. Halverson

Long May the Empty Flag Wave

In his "Cerf’s Up" column "Heidelberg Laureate Forum II" (Nov. 2014), Vinton G. Cerf explored Ivan Sutherland’s idea for detecting "empty" when "nothing has been stored in the register," highlighting the existence of an "’empty’ flag" (one bit) to carry the information that a register (or memory location) is empty. Such a flag would mean an N-bit register would require N+1 bits. As Cerf said, such "a register of N bits has 2N values," or not using half its capacity. Another way to achieve the desired result (without wasting capacity) would be to deem a particular bit configuration as signifying "empty." On systems employing "sign and magnitude" representation of numbers there is no need for both a "+0" and a "−0," as they have the same arithmetic properties. One can be used for all arithmetic operations, the other as the "empty flag." Even on systems employing "two’s complement arithmetic" the same bit configuration—leading bit "1," all other bits "0"—would signify an empty register. This bit configuration represents a negative number, one unit larger in magnitude than the largest positive number. Using it to signify "empty" removes the asymmetry in number range between positive and negative numbers. A high-speed "comparator" can easily provide an "interrupt" when a program fetches the bit configuration for an "empty" cell. A circuit consisting of an "inverter gate" (for the leading bit) and a "wired-OR" is all the system needs to generate a signal that the "empty flag" has been detected.

The concept of an "empty flag" is not new. Compilers have long generated code incorporating a comparison of a fetched variable with the bit configuration of an "empty flag" to detect the use of an uninitialized variable. A "compare" operation and a software "interrupt" on equality are all that is necessary.

Paul B. Schneck, Bala Cynwyd, PA

‘Not My Fault’ Not Good Enough

For as long as I can remember software being for sale, it has always come with a standard disclaimer, something like "This software is not guaranteed suitable for any purpose or to be free of defects. You use it at your own risk. And we, the producers of the software, are not responsible for anything, no matter what." Next comes a stipulation saying that in order to use the software, you must agree to these terms. This is all so the software producers can make more money, selling quickly developed, untested software full of bugs and security holes they claim is a good product, even when it is not.


Today we read about self-driving cars embedded with millions of lines of code for driving the car. The product liability lawyers of the world are unlikely to let software producers continue to have the luxury of not being responsible for anything, just because they say so or because a user agreed to their terms by clicking a button or by using the software.

People’s lives and property are at stake. If a wreck or a person’s death is caused by deficiencies or bugs in a car’s software, then, agreement or no agreement, the producers of the software should be held accountable. This is how things should have been from day one of software development but never have been.

Software producers will not willingly accept such a change. Instead, it will be forced upon them by their own financial losses, when lawyers start winning lawsuits against them, thereby undoing the producers’ self-granted "Get Out of Jail Free" card and negating their bogus claims of "I’m not responsible for anything."

Hal B. Lowe, Oswego, IL
