Georgia Institute of Technology robotics engineer Ronald Arkin has dedicated his life's work to the development of ethical battlefield robots embedded with a sense of guilt that could eventually make them more effective than human soldiers at reducing civilian casualties. Arkin says the robots would be designed to comply with internationally prescribed laws of war and rules of engagement. He describes the machines' guilt system "as a means of downgrading the robot's ability to engage targets if it is acting in ways which exceed the predicted battle damage in certain circumstances."
Researchers have established thresholds for analogs of guilt, Arkin says, that eventually cause the robot to refuse to use certain types of weapons, or to refuse to use weapons altogether, if battlefield conditions reach a point where the predictions it is making become intolerable by its own standards. He theorizes that ethical unmanned systems could perform more humanely than soldiers in battle, citing counter-sniper operations and the capture of buildings as possible applications where, with enough morality engineered into them, the machines might outperform people in this regard.
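The threshold mechanism described above can be illustrated with a minimal sketch. Everything here is a hypothetical reconstruction for illustration only, not Arkin's actual system: the class name, the numeric thresholds, and the weapon categories are all assumptions.

```python
# Hypothetical sketch of a guilt-threshold mechanism: guilt accumulates when
# observed battle damage exceeds predictions, and crossing thresholds
# progressively restricts which weapons the robot will use.
# All names and values are illustrative assumptions, not Arkin's design.
from dataclasses import dataclass


@dataclass
class GuiltAdaptor:
    guilt: float = 0.0
    restrict_threshold: float = 0.5  # above this: refuse certain weapon types
    refuse_threshold: float = 1.0    # above this: refuse all weapons

    def record_engagement(self, predicted_damage: float, observed_damage: float) -> None:
        # Guilt grows only when the outcome is worse than the prediction.
        self.guilt += max(0.0, observed_damage - predicted_damage)

    def permitted_weapons(self) -> set:
        if self.guilt >= self.refuse_threshold:
            return set()                 # refuse to use weapons altogether
        if self.guilt >= self.restrict_threshold:
            return {"light"}             # refuse heavier weapon types
        return {"light", "heavy"}


adaptor = GuiltAdaptor()
# An engagement that causes more damage than predicted raises guilt.
adaptor.record_engagement(predicted_damage=0.2, observed_damage=0.9)
print(adaptor.permitted_weapons())
```

The key design point the article implies is monotonic restriction: guilt only accumulates, so repeated bad outcomes ratchet the robot toward refusing to engage at all.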
Arkin says one of the issues that could lead to errors by ethical robots is the question of responsibility. "We have worked hard within our system to make sure that responsibility attribution is as clear as possible using a component called the 'responsibility advisor,'" he says. Arkin built an override mechanism for the robot's guilt system, but the machine would still place responsibility on human users by alerting them to the potential ethical infractions it believes it could commit by following a certain course of action.
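The override flow described above can be sketched as follows. This is a hypothetical illustration in the spirit of the "responsibility advisor," not its actual implementation: the class, method names, and log format are all assumptions.

```python
# Hypothetical sketch of a responsibility-attribution flow: a human may
# override the guilt system, but only after being alerted to the potential
# ethical infractions, and the override is logged to that operator.
# Names and structure are illustrative assumptions, not Arkin's design.
from datetime import datetime, timezone


class ResponsibilityAdvisor:
    def __init__(self):
        self.log = []  # record of who accepted responsibility, and for what

    def request_override(self, operator: str, action: str, infractions: list) -> bool:
        # Alert the operator to the infractions the robot believes
        # the requested action could commit.
        warning = f"Action '{action}' may violate: {', '.join(infractions)}"
        if self.confirm(operator, warning):
            self.log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "operator": operator,
                "action": action,
                "infractions": infractions,
            })
            return True
        return False

    def confirm(self, operator: str, warning: str) -> bool:
        # Stand-in for an interactive confirmation step.
        print(f"[{operator}] {warning}")
        return True


advisor = ResponsibilityAdvisor()
advisor.request_override("operator-1", "engage target",
                         ["predicted damage exceeds estimate"])
```

The point of logging the acknowledgment is the one the article emphasizes: overriding the guilt system does not remove accountability, it transfers it explicitly to the human who confirmed the warning.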
From "Q&A: Robotics Engineer Aims to Give Robots a Humane Touch"
CNET (07/08/09) Kerr, Dara