Letters to the Editor

How to Think About Objects

Though I agree with Mordechai Ben-Ari’s Viewpoint “Objects Never? Well, Hardly Ever!” (Sept. 2010) that students should be introduced to procedural programming before object-oriented programming, dismissing OOP altogether could mean throwing out the baby with the bathwater.

OOP was still in the depths of the research labs when I was earning my college degrees. I was not exposed to it for the first few years of my career, but it intrigued me, so I began to learn it on my own. The adjustment from procedural programming to OOP wasn’t just a matter of learning a few new language constructs. It required a new way of thinking about problems and their solutions.

That learning process has continued. The opportunity to learn elegant new techniques for solving difficult problems is precisely why I love the field. But OOP is not the perfect solution, just one tool in the software engineer’s toolbox. If it were the only tool, we would risk bearing out psychologist Abraham Maslow’s warning that if the only tool you have is a hammer, every problem tends to look like a nail.

Learning any new software technique—procedural programming, OOP, or whatever comes next—takes time, patience, and missteps. I have made plenty of missteps myself while learning OOP and other technologies, and I continue to learn from them and improve because of them.

For his next sabbatical, Ben-Ari might consider stepping back into the industrial world for a year or two. We’ve learned a great deal about OOP since he left for academia 15 years ago.

Jim Humelsine, Neptune, NJ


Evaluating Research: Hypercriticality vs. Radical Empiricism

In his Viewpoint “Is Computer Science Truly Scientific?” (July 2010), Gonzalo Génova suggested that computer science suffers from “radical empiricism,” leading to rejection of research not supported by empirical evidence. We take issue with both his claim and (perhaps ironically) the evidence he used to support it.

Génova rhetorically asked, “Must all scientific works be reasoned and demonstrable?” and answered emphatically, “Yes, of course,” a position with which we wholeheartedly agree. Broadly, there are two ways to achieve this goal: inference and deduction. Responding to Joseph G. Davis’s letter to the editor “No Straw Man in Empirical Research” (Sept. 2010, p. 7), Génova said theoretical research rests on definition and proof, not on evidence. Nonetheless, he appeared to conflate inference and deduction in arguing that seminal past research would be unacceptable today. Many of the famous computer scientists he cited to support this assertion—Turing, Shannon, Knuth, Hoare, Dijkstra—worked (and proved their findings) largely on the more theoretical side of CS. Even a cursory reading of the latest Proceedings of the Symposium on Discrete Algorithms or Proceedings of Foundations of Computer Science turns up many theoretical papers with little or no empirical content. The work of other pioneers Génova cited, including Meyer and Gamma, might have required more empirical evidence if presented today. Génova implied their work would not be accepted, and we would therefore be unable to benefit from it. But the fact that they met the requirements of their time, but (arguably) not of ours, does not mean they would not have risen to the occasion had the bar been set higher. We suspect they would have, and CS would be none the poorer for it.

Génova’s suggestion that CS suffers today from “radical empiricism” is an empirical, not deductive, claim that can be investigated through surveys and reviews. Still, he supported it via what he called “inductive justification,” which sounds to us like argument by anecdote. Using the same inductive approach, conversations with our colleagues here at the University of California, Davis, especially those in the more theoretical areas of CS, lead us to conclude that today’s reviews, though demanding and sometimes disappointing, are not “radically empirical.” To the extent a problem exists in the CS review process, it is due to “hypercriticality,” as Moshe Y. Vardi said in his “Editor’s Letter” (July 2010, p. 5), not “radical empiricism.”

Earl Barr and Christian Bird, Davis, CA

*  Author’s Response:

I’m glad to hear from Barr and Bird that there are healthy subfields in CS in this respect. I used “inductive justification” to support the claim that many classical works in the field are more theoretical and speculative than experimental, not to argue that CS suffers today from “radical empiricism.” Investigating the latter through exhaustive empirical surveys of reviews would require that surveyors be able to classify a reviewer as a “radical empiricist.” If my column served this purpose, then I am content with it.

Gonzalo Génova, Madrid, Spain


Conclude with the Conclusions

The Kode Vicious Viewpoint “Presenting Your Project” by George V. Neville-Neil (Aug. 2010) made several debatable points about presentations, one of which was inexcusable: “…I always end with a Questions slide.”

You have just given a 25-minute technical presentation to an educated, knowledgeable, technical audience. Using a series of slides, you have explained your problem, described your solutions, discussed your experiments, and finally concluded, displaying each slide for a minute or two. Your penultimate slide summarizes the whole presentation, including its “takeaway” message—everything you want your listeners to remember. Now you expect to spend four or five minutes answering questions. The slide you show as you answer will be on screen two or three times longer than any other slide.

So why remove the most useful slide in the whole presentation—the summary—and replace it with a content-free alternative showing perhaps a word or two? Is your audience so dense it cannot hear you say “Thank you” or ask for questions unless the words are on the screen? Do you think the audience will forget to ask something? Or is the problem with you, the presenter? Would you yourself forget to ask for questions if the slide wasn’t on the screen in front of you?

Technical presentations should be held to a higher standard of information content and knowledge transfer than a sales pitch. My advice: Remove the “Thank You” and “Questions” slides, and leave up your “Conclusions” and “Summary” as long as possible.

Michael Wolfe, Hillsboro, OR


For Electronic Health Records, Don’t Ignore VistA

Why did Stephen V. Cantrill’s article “Computers in Patient Care: The Promise and the Challenge” (Sept. 2010) say nothing about the Veterans Health Information Systems and Technology Architecture (VistA) used for decades throughout the U.S. Department of Veterans Affairs (VA) medical system for its patients’ electronic medical records? With 153 medical centers and 1,400 points of care, the VA in 2008 delivered care to 5.5 million people, registering 60 million visits (http://www1.va.gov/opa/publications/factsheets/fs_department_of_veterans_affairs.pdf).

In his book The Best Care Anywhere (http://p3books.com/bestcareanywhere) Phillip Longman documented VistA’s role in delivering care with better outcomes than national averages to a population less healthy than national averages at a cost that has risen more slowly than national averages. Included was a long list of references (more than 100 in the 2010 second edition), yet Cantrill wrote “Although grand claims are often made about the potential improvements in the quality of care, decreases in cost, and so on, these are very difficult to demonstrate in a rigorous, scientific fashion.”

Public-domain VistA also generalizes well outside the VA. For example, it has been deployed in the U.S. Indian Health Service, with additional functionality, including pediatrics. Speaking at the 2010 O’Reilly Open Source Convention (http://www.oscon.com/oscon2010/public/schedule/detail/15255), David Whiles, CIO of Midland Memorial Hospital, Midland, TX, described his hospital’s deployment of VistA, which has since seen a reduction in mortality of about two deaths per month, as well as a dramatic 88% decrease in central-line infections at catheter sites (http://www.youtube.com/watch?v=ExoF_Tq14WY). Meanwhile, the country of Jordan (http://ehs.com.jo) is piloting an open source software stack deployment of VistA to provide electronic health records within its national public health care system.

[In the interests of full disclosure, I am an active member of the global VistA community, co-founding WorldVistA in 2002 (http://worldvista.org), a 501(c)(3) promoting affordable health care IT through VistA. Though now retired from an official role, I previously served as a WorldVistA Director.]

K.S. Bhaskar, Malvern, PA

*  Author’s Response:

I appreciate Bhaskar’s comments about the VA’s VistA medical information system and applaud his efforts to generate a workable system in the public domain, but he misunderstood the intent of my article. It was not to be a comparison of good vs. bad or best vs. worst, but rather a discussion of many of the endemic issues that have plagued developers in the field since the 1960s. For example, MUMPS, the language on which VistA is based, was developed in the early 1970s for medical applications; VistA achieved general distribution in the VA in the late 1990s, almost 30 years later. Why so long? I tried to address some of these issues in the article. Also, VistA does not represent an integrated approach, but rather an interfaced approach with several proprietary subsystems.

Stephen V. Cantrill, M.D., Denver


Correction

The tribute “Robin Milner: The Elegant Pragmatist” by Leah Hoffmann (June 2010) requires a small correction. It said Milner “served as the first chair of the University of Cambridge Computer Laboratory.” For all his many gifts, however, Milner was not precocious enough to have run a university laboratory at age three. He was born in 1934, and the University of Cambridge Computer Laboratory was founded (as the Mathematical Laboratory) in 1937. The source of the error is apparently Milner’s official obituary, which noted he “held the first established Chair in Computer Science at the University of Cambridge.” Translation to American: He held the first endowed professorship. In Britain, the word “Chair” refers to a professorship and should not be confused with “Head of Department.”

Lawrence C. Paulson, Cambridge, England


Correction

Tom Geller’s news story “Beyond the Smart Grid” (June 2010) should have cited Froehlich, J., Larson, E., Campbell, T., Haggerty, C., Fogarty, J., and Patel, S. “HydroSense: Infrastructure-Mediated Single-Point Sensing of Whole-Home Water Activity” in Proceedings of UbiComp 2009 (Orlando, FL, Sept. 30–Oct. 3, 2009) instead of Patel, S.N., Reynolds, M.S., and Abowd, G.D. “Detecting Human Movement By Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure-Mediated Sensing” in Proceedings of Pervasive 2008, the Sixth International Conference on Pervasive Computing (Sydney, Australia, May 19–22, 2008). We apologize for this error.

