Accountability Is No Excuse For Surveillance

Letters to the Editor

British philosopher and social theorist Jeremy Bentham would have wholeheartedly endorsed many of the accountability mechanisms Stefan Bechtold and Adrian Perrig outlined in their Viewpoint "Accountability in Future Internet Architectures" (Sept. 2014). The Viewpoint reminded me of Bentham's Panopticon (late 18th century), a prison design in which prisoners would be motivated to behave in a more civilized manner by being made to think they were always under surveillance. Likewise, Bechtold and Perrig took the view that tracking network users and holding them accountable for their actions would improve the Internet.

I am certain the majority of governments today would endorse this architecture, in which it would be possible to trace all Internet Protocol communication packets from source to destination and guarantee everyone is using the network responsibly. Indeed, many governments already pursue such a goal.

On the other hand, I am concerned the pervasive monitoring already present in today's global Internet, even without these technical aids, might not be in society's best interests. With U.S. State Department sponsorship, I have been aiding a user group of journalists and democracy advocates in African countries, many with authoritarian tendencies, developing anonymization tools and training participants to use them. In many of these countries, accountability for accessing information considered innocuous in the West has dire consequences. Many of those lacking the human rights protections found in Western democracies indeed use the technology produced in the West.

I dislike the idea of making people accountable for the information they consume, which would be a by-product of the ideas Bechtold and Perrig proposed.

Richard R. Brooks, Clemson, SC

Authors’ Response

Like Brooks, we strongly support privacy and anonymity for users. However, we strongly disagree with an interpretation of our Viewpoint that says we envision a future Internet architecture that tracks users. Our aim was (and is) more discerning. As we pointed out, it is sometimes possible to achieve both privacy and accountability, whereby users maintain their privacy and become accountable only if they violate some policy, say, by perpetrating an attack. Moreover, anonymity can be achieved through an overlay network, even if the underlying network is accountable. We also highlighted the research challenges involved in balancing accountability, privacy, anonymity, political freedom, and other values. Brooks seems to have missed this core point.

Stefan Bechtold and Adrian Perrig, Zurich, Switzerland

To Counter Cybercrime, Try Security Diligence

Cormac Herley’s article "Security, Cybercrime, and Scale" (Sept. 2014) focused on a logical analysis of narrowly defined financial cybercrimes gainfully performed by untrusted remote perpetrators, not by embezzlers. The objective Herley specified in this logical model is improved security to reduce the risk posed by rational, financially motivated, untrusted perpetrators able to carry out all possible scaled attacks.

Having interviewed more than 200 cybercrime perpetrators over the past 40 years, I suggest reality is quite different. First, perpetrators possess only partial knowledge. They also make errors that change their objectives, take fewer financial assets than are available, do not necessarily consider cost, perform copycat attacks, and act under many other personal, irrational conditions and circumstances that were present in all of the cases I studied.


Here is my threat model: Alice knows she cannot be sufficiently secure from attacks by Mallory and thus seeks to avoid negligence after Mallory (inevitably) attacks, successfully or not.

Herley correctly noted the limitations of successful risk reduction, but a different objective and strategy are more desirable for my model. The objective I advocate is security diligence rather than risk reduction. It is a safer, more easily obtained and measured objective for the enterprise; it is more likely to meet insurance requirements; and it covers a broader range of risks, possibly including the risk of negligence on the part of the victim enterprise and the stakeholders within it. I have found in practice this is often more important than financial loss.

The diligence strategy is to implement security controls through benchmark studies, standards, compliance, contracts, audits, good practices, available products, cost control, experts’ opinions, and experimentation. The tough, high-cost decisions are made by management fiat, not necessarily by risk-reduction analysis.

Donn B. Parker, Los Altos, CA

Correction

The Milestones section "Computer Science Awards, Appointments" (Sept. 2014) reported Jack Dongarra of the University of Tennessee, Knoxville, as the recipient of the ACM-IEEE Computer Society Ken Kennedy Award. Allow us to clarify. Dongarra was the 2013 recipient. Charles E. Leiserson of the Massachusetts Institute of Technology was recently named the recipient of the award for 2014 (see page 14).
