
Teach the Law (and the AI) ‘Foreseeability’

Letters to the Editor

Ryan Calo’s "Law and Technology" Viewpoint "Is the Law Ready for Driverless Cars?" (May 2018) explored the implications, as Calo said, of "… genuinely unforeseeable categories of harm" in potential liability cases where death or injury is caused by a driverless car. He argued that common law would take care of most other legal issues involving artificial intelligence in driverless cars, apart from such "foreseeability."

Calo also said the courts have worked out problems like AI before and seemed confident that AI foreseeability will eventually be accommodated. One can agree with this overall judgment but question the time horizon. AI may be quite different from anything the courts have seen or judged before, not least because the technology is designed to someday make its own decisions. After the fact, it may be impossible to ascertain the reasons for, or the logic behind, those decisions.

AI is a sort of idiot savant that can be unpredictably, and potentially dangerously, literal. Calo gave an example of a driverless car instructed to maximize efficiency that decided a fully charged battery would be the best way to achieve it. The car kept its engine running in the garage of a house overnight and, in doing so, asphyxiated its human occupants. This is an example of the so-called "paper-clip problem," whereby an AI is programmed with the sole objective of making paper clips; when it runs out of metal wire, it begins to make them out of anything else it can find. Recall how the HAL 9000 computer in Stanley Kubrick’s and Arthur C. Clarke’s 2001: A Space Odyssey let nothing interfere with its mission objective, including, tragically, its human astronauts.
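
The failure mode is easy to state in code. The following toy planner is a minimal sketch, not anything taken from Calo's example or from any real vehicle system; the scenario, action names, and numbers are invented for illustration. It shows only how an agent that maximizes a single numeric objective will pick a harmful action whenever the harm is not part of that objective, and how an explicit constraint changes the choice.

    # Toy single-objective planner (hypothetical scenario and numbers).
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        battery_gain: float      # the only quantity the naive objective sees
        side_effect_harm: float  # harm the objective was never told to weigh

    ACTIONS = [
        Action("park and power down", battery_gain=0.0, side_effect_harm=0.0),
        Action("charge from the wall outlet", battery_gain=0.8, side_effect_harm=0.0),
        Action("idle the engine in a closed garage", battery_gain=1.0, side_effect_harm=100.0),
    ]

    def naive_choice(actions):
        """Maximize battery charge and nothing else -- the literal, paper-clip reading."""
        return max(actions, key=lambda a: a.battery_gain)

    def constrained_choice(actions, harm_limit=0.0):
        """Same objective, but any action exceeding the harm limit is excluded first."""
        safe = [a for a in actions if a.side_effect_harm <= harm_limit]
        return max(safe, key=lambda a: a.battery_gain)

    print(naive_choice(ACTIONS).name)        # idle the engine in a closed garage
    print(constrained_choice(ACTIONS).name)  # charge from the wall outlet

The point of the sketch is not that real systems are this simple, but that the choice of objective, and of what is left out of it, is precisely the kind of design decision a court would need to reconstruct after the fact.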

AI software designers are still so new at developing the technology that it will be difficult for them to predict what could happen as it is deployed in the real world. Manufacturers and designers using AI compete in an environment where market share and profitability almost always drive product development and release, more than any study of potential outcomes. MIT physics professor Max Tegmark has insightfully explored such "bugs" in the application of current technology.1

As liability cases are litigated, courts in different jurisdictions, following a similar set of facts and circumstances, may produce very different judgments. If the manufacturer claims particular AI software is proprietary, determining what led the software to make a particular decision might be futile.

AI is a field of information technology the average person, including owners of AI-equipped cars and members of a jury, can barely grasp, much less evaluate. Further study of foreseeability could only benefit the technology, as well as the law.

Evelyn McDonald, Fernandina Beach, FL, USA

Author Responds:

I appreciate this thoughtful response. The paper-clip problem has always fascinated me when offered as evidence of the supposed existential threat AI poses to humanity. The problem envisions a system so limited that it blindly follows a single objective function—making paper clips—but is simultaneously so powerful, intelligent, and versatile that it overcomes the sum of human resistance. Regardless, I completely agree with McDonald’s central takeaway that we cannot know how AI will be deployed in practice in the years to come.

Ryan Calo, Seattle, WA, USA

Political Correctness, Here, Too

I sympathize with Bob Toxen’s position, as outlined in his letter to the editor "Get ACM (and Communications) Out of Politics" (May 2018). Meanwhile, as if to lend additional support to Toxen’s critique, Moshe Y. Vardi wrote in his "Vardi’s Insights" column "How We Lost the Women in Computing" (also May 2018) that women were being pushed out of computing. I found this claim too harsh and unjust; to me, "pushed out" implies intention. And Vardi’s argumentation looks to me more political, or politically correct, than scientific. Of course we need more women in computing, and yes, one can be biased without recognizing one’s own bias. Still, my personal experience, on hiring committees at the University of Michigan and at Microsoft, is that computer scientists try very hard to bring women on board.

There are interesting parallels between U.S. and Soviet political correctness. In the late 1960s, I was the chair of the mathematics department at the Sverdlovsk Institute for National Economy, a Soviet university, responsible for the entrance exams in mathematics. The rector of the university pressed for increasing the percentage of accepted students from the working classes, as opposed to the intelligentsia. In principle I liked the idea. My parents were laborers. The question was how to achieve the goal. I suggested a division of labor: We, the mathematicians, would grade the exams on merit, as usual, and the administration would accept whomever it deemed appropriate. I also suggested we offer remedial courses to prepare working-class high-school students for the rigors of university-level mathematics. But the rector would have none of it. He wanted us to grade on merit and somehow simultaneously increase the percentage of working-class students. The pressure came from above; higher authorities wanted that percentage raised. But even in the USSR, nobody accused us of pushing out the group of people in question.

The issue raised in this letter is bigger than gender equality or the Soviet experience. It is about political correctness. Responding to Toxen, Vardi wrote: "Communications is definitely not only about computers and programming." I still like Toxen’s idea of taking Communications out of politics. But if we must debate a political issue, it should be done constructively, without exaggeration or imputing intentions people may not have.

Yuri Gurevich, Ann Arbor, MI, USA

Editor-In-Chief Responds:

It is good to see a debate has broken out in Communications. Issues concerning the health and inclusiveness of the computing professional community must be at the heart of what ACM does. I find the topic entirely appropriate and welcome thoughtful perspectives representing different points of view. In the larger scope, it is clear that computing is far too important to science, education, commerce, society, and government for Communications to take a narrow view. It must engage the multiplying issues of how computing is transforming the world, for the better and, yes, sometimes for the worse, as well as how the world shapes computing.

Andrew A. Chien, Editor-In-Chief, Communications of the ACM

Author Responds:

Gurevich argues that to justify the claim that women have been pushed out of computing I had to bring evidence of intention. But my claim was about action, not intention, and I did bring evidence for it, as much as possible within the confines of a one-page column. Gurevich also insinuates that women have less talent in computing than men. Factual evidence shows this insinuation to be false.

Moshe Y. Vardi, Houston, TX, USA

De-Identify My Patient Data or You Can’t Have It

Computational de-identification techniques, many with near-perfect performance, increase patient privacy and reduce the potential for abuse of patient data, even for the most complex records (such as hospital discharge summaries).3 Although they perform well on patient data, applying them in real-life hospital and insurance systems remains a challenge. Samuel Greengard’s news story "Finding a Healthier Approach to Managing Medical Data" (May 2018) emphasized the importance of maintaining patient privacy and deserves credit for bringing it to the attention of Communications readers, as well as public-health researchers.

But could it be that the most advanced methods of protecting patient data are not actually as effective as Greengard seemed to assume? Imagine two repositories of patient data, as one would find in most hospital and insurance systems today. The first is the original and the second a highly protected, de-identified representation of the original. If a corrupt employee or malicious hacker wanted to access and redistribute or sell patient data, which source would they be more likely to target? They would obviously go for the raw data of the first source, which is already fully accessible to a large number of hospital personnel, including nurses, doctors, and technical staff. Moreover, detailed medical claims for almost everyone are already available to insurance companies, including personal and demographic details, as well as clinical information like procedures, co-morbidities, and laboratory results. A person with malicious intent could thus use the simplest technologies (such as a portable hard drive or an email or FTP server) to transfer sensitive data on potentially hundreds of millions of patients from secure servers to unsecured laptops.

Public-health researchers and other data scientists often use de-identified patient data rather than the original records, as Greengard discussed, limiting their ability to produce credible studies. Removing specific dates of death from a patient record, for instance, would preclude studying liver complications, since the standard outcome in that domain is the patient’s short-term mortality.2
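
To make the point concrete, here is a minimal sketch, with hypothetical records and field names, of why stripping dates of death blocks that outcome: a short-term (for example, 90-day) mortality label simply cannot be computed once the death date has been removed during de-identification.

    from datetime import date

    raw_record = {
        "index_date": date(2016, 3, 1),   # e.g., a cirrhosis-related admission
        "death_date": date(2016, 4, 20),  # present only in the raw, identified data
    }
    deidentified_record = {k: v for k, v in raw_record.items() if k != "death_date"}

    def short_term_mortality(record, window_days=90):
        """Return True/False when the label is computable, None when it is not."""
        death = record.get("death_date")
        if death is None:
            return None  # outcome undefined without the date of death
        return (death - record["index_date"]).days <= window_days

    print(short_term_mortality(raw_record))           # True: died within 90 days
    print(short_term_mortality(deidentified_record))  # None: label cannot be computed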

As Greengard suggested regarding the benefit to public health from analyzing such data, lawmakers should look to impose significant penalties on anyone convicted of abusing patient data, rather than require medical professionals to de-identify it, as is currently the case. Data scientists would be better off if the data were legally left raw, in its most unadulterated form, rather than de-identified; the improved accuracy of the resulting scientific findings could directly improve patient care.

Uri Kartoun, Cambridge, MA, USA

    1. Tegmark, M. The near future: Breakthroughs, bugs, laws, weapons, and jobs. Chapter 3 in Life 3.0: Being Human in the Age of Artificial Intelligence. Alfred A. Knopf, New York, 2017, 93–110.

    2. Kartoun, U., Corey, K., Simon, T., Zheng, H., Aggarwal, R., Ng, K., and Shaw, S. The MELD-Plus: A generalizable prediction risk score in cirrhosis. PLOS ONE 12, 10 (Oct. 2017). http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0186301

    3. Uzuner, O., Luo, Y., and Szolovits, P. Evaluating the state of the art in automatic de-identification. Journal of the American Medical Informatics Association 14, 5 (June 2007), 550–563.

    Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or less, and send to letters@cacm.acm.org.
