Forum
  1. Rights and Wrongs in Scientific Publications
  2. Give Users More Than Propaganda
  3. Define Accountability in Software Development
  4. Stick to Public Safety, Not Business Software, in Laws and Regulations

In "Self-Plagiarism in Computer Science" (Apr. 2005), Christian Collberg and Stephen Kobourov discussed a particular type of ethical misconduct—self-plagiarism—in scientific publication. I agree with the opinion they quoted, that "real plagiarism is much worse (than self-plagiarism)." False citation is another vice; for example, "this is an important problem [1]," but [1] supports no such claim. Detection is a burden on referees, but I tend to agree that no one else can do the job.

Collberg and Kobourov concentrated on self-plagiarism, which may be easier to detect (with the help of their automated tool), but they overstated their condemnation of it, a point Jonathan Grudin raised in his comment on the article in the May 2005 Forum. One of Grudin’s main points involved journal republication of work previously presented at conferences. At least in theoretical computer science, "refereed" conferences are refereed far more superficially than journals. Moreover, conference proceedings impose a page limit, so most such papers are abbreviated by, say, leaving out some proofs.

The fact that conference papers are easily accessible from and archived in online libraries does not make republication of work in journals any less important. Under these circumstances, the verbatim inclusion of text from a conference proceedings in a journal version of the same work is far from misconduct.

Another comment cited by Collberg and Kobourov (though they seemed to disregard it) was "…CS papers should be largely self-contained, and that inevitably means duplication." Indeed, if you have written an excellent introduction to a topic or a summary of previous work, what exactly is "detrimental to science" in repeating it verbatim (up to the reference to your own first paper) in a later paper on the same topic to make the new paper more complete and self-contained?

Collberg and Kobourov did not provide a fair discussion of rights and wrongs in scientific publication. Still, their statement "we should hold ourselves to the same high standards as we do our students" is, unfortunately, very much on point. Yet another salient point requiring further discussion is the question: "What should the consequences of such misconduct be?" Under current practice, the worst retribution is rejection of the paper, which can then promptly be resubmitted to another journal or conference.

Amir Ben-Amram,
Tel Aviv, Israel

Authors Respond:

Ben-Amram points to one of the main problems in distinguishing fair textual reuse from self-plagiarism. A journal version of a conference paper likely does not constitute self-plagiarism, provided the journal version contains a reference to the original paper, allowing the journal editors to determine whether the duplication falls within the journal’s own standards.

Not so clear is the case of identical conference and journal versions with different titles, especially if the journal version contains no reference to the conference version. Views about recycling a well-written introduction and/or related work sections vary dramatically, as indicated by the answers we received to our survey questions. One computer scientist found this practice "not a big deal," while another wrote "I think this is very disturbing." We did not set out to determine the "rights and wrongs in scientific publications" but rather to open a discussion about the fuzzy boundary between fair textual reuse and self-plagiarism.

Christian Collberg
Stephen Kobourov
Tucson, AZ


Give Users More Than Propaganda

In "Designing Sticky Knowledge Networks" (May 2005), Ashley A. Bush and Amrit Tiwana wrote that "System designers must recognize that user perceptions matter, and that they can be manipulated through thoughtful system design. The key is to make each user’s contributions appear to be indispensable." I find manipulating people wrong and distasteful. Enhancing a system’s value and appeal is one thing; advocating propaganda is another. Perhaps it was not Bush’s and Tiwana’s intent, but I suggest the editors exercise a bit more control in this area.

Alex Simonelis
Montreal

Authors Respond:

Simonelis’s interpretation of the word "manipulated" differs from the meaning we intended: "to move, arrange, operate, or control…in a skillful manner" (American Heritage Dictionary, Fourth Edition). We suggested only that users be given feedback that raises otherwise-obscured awareness of the value they derive from a knowledge network, nothing more. Such skillful design is no different in spirit from traffic lights showing only one color at a time, highway signs conveying direction, or elevators informing riders which floor they’re on. Selectively providing real-time feedback to users is likewise a case in which less information does more to make the system sticky.

Ashley A. Bush
Tallahassee, FL
Amrit Tiwana
Ames, IA


Define Accountability in Software Development

I applaud David A. Patterson’s Security, Privacy, Usability, and Reliability, or SPUR, initiative described in his President’s Letter ("20th Century vs. 21st Century C&C: The SPUR Manifesto," Mar. 2005) and continue to be appalled by the lack of accountability for the security, reliability, and safety of embedded software systems. These three attributes define the trustworthiness of software. Software professionals may be distrusted because their work often leads to untrustworthy products. Alan Salisbury, president of the Center for National Software Studies, has said, "For far too long we have simply accepted poor quality software as a fact of life." The Center has issued a call for action aligned with SPUR.

I propose that every software product have a named software architect who warrants that the product solves the problem for which it was designed and a software project manager who warrants that good practices were followed in its development. The release documentation would include a statement describing the testing, processes, and technologies used by these named professionals and another statement describing their qualifications.

Larry Bernstein
Hoboken, NJ


Stick to Public Safety, Not Business Software, in Laws and Regulations

As a manager of software projects, I often seek analogies to explain my team’s work to people more familiar with, say, hardware projects. Many of the questions Phillip G. Armour raised in his "The Business of Software" column ("Sarbanes-Oxley and Software Projects," June 2005) apply readily to domains other than software. Armour emphasized how organizations measure, track, and assess the risk in their software project investments. The same process can be applied to, say, investing in reporters to write articles for a newspaper that aims to gain readers and sell space to advertisers: do the articles measure up and deliver value to the organization? It might also be applied to a restaurant chain in which the key to business success is clearly its recipes and the skill of its chefs, not just its inventoried food stocks, chairs, and tables. Such a chain must ask itself whether it has properly assessed its investment in recipes and chefs, as well as the more generic business risks.

Software is in no way different from such endeavors. It is just as much a product of a company in which investment in human specialists and managers is critical to success.

It would be foolish to expect the Sarbanes-Oxley Act of 2002, or SOX, to guarantee business success. Formal regulation and controls are better applied in the realm of public safety. Nancy G. Leveson’s insightful IEEE Computer article "High-Pressure Steam Engines and Computer Software" (Oct. 1994), on government regulation of steam boilers following deaths and injuries caused by boiler explosions, is especially relevant today, as more and more public services (or recipes or newspaper articles) depend on software and involve the risk of injury. The government can step in and regulate for public safety. It seems to me there’s far more merit in that direction than SOX seems to promise.

Rich Altmaier
Cupertino, CA

