
When It Comes to Robots, Reliability May Matter More Than Reasoning

By U.S. Army

September 30, 2019



A study by the U.S. Army Research Laboratory and the University of Central Florida found that human confidence in a robot decreases after the robot makes a mistake, even when its reasoning process is transparent.

The researchers studied human-agent teaming to determine how the transparency of agents, such as robots, unmanned vehicles, or software agents, affects human trust, task performance, workload, and perception of the agent. Subjects who observed a robot make a mistake rated its reliability lower, even when it made no subsequent mistakes.

Boosting agent transparency improved participants' trust in the robot, but only when the robot was collecting or filtering data.

"Understanding how the robot's behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members," says Julia Wright of the Army Research Laboratory.

From U.S. Army

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA
