Expect 'Ungoverned Actors' to Use AI-Supported Weapons, Too

By CACM Staff

Communications of the ACM, Vol. 59 No. 2, Pages 8-9

Both sides of the Point/Counterpoint "The Case for Banning Killer Robots" (Dec. 2015) on lethal autonomous weapons systems (LAWS) seemed to agree the argument concerns weapons that, as Stephen Goose wrote in his "Point," "once activated, would ... be able to select and engage targets without further human involvement." Arguments for and against LAWS share this common foundation, but where Goose argued for a total ban on LAWS-related research, Ronald Arkin, in his "Counterpoint," favored a moratorium while research continues. Both sides accept international humanitarian law (IHL) as the definitive authority on whether LAWS represent humane weapons.

If I read them correctly, Goose's position was that because LAWS would be able to kill on their own initiative, they differ in kind from other technologically enhanced conventional weapons. That difference, he said, puts them outside the allowable scope of IHL, and they therefore ought to be banned. Arkin agreed LAWS differ from prior weapons systems but proposed the difference is largely one of degree of autonomy, and that their lethal capability can be managed remotely when required. Arkin also said continued research would remedy deficiencies in LAWS, thereby likely reducing noncombatant casualties.
