
Overtrust in the Robotic Age

By Alan R. Wagner, Jason Borenstein, Ayanna Howard

Communications of the ACM, Vol. 61 No. 9, Pages 22-24
DOI: 10.1145/3241365



As robots complement or replace human efforts with more regularity, people may assume that the technology can be trusted to perform its function effectively and safely. Yet designers, users, and others must evaluate this assumption in a systematic and ongoing manner. Overtrust of robots describes a situation in which a person misunderstands the risk associated with an action because the person underestimates the loss associated with a trust violation, underestimates the chance the robot will make such a mistake, or both.

We deliberately use the term "trust" to convey the notion that when interacting with robots, people tend to exhibit behaviors and attitudes similar to those found in human-human interactions. Placing one's trust in an "intelligent" technology is a growing phenomenon. In a sense, it is a more extreme version of automation bias, the tendency of people to defer to automated technology when presented with conflicting information [6]. Early research on this issue focused primarily on autopilots and factory automation [7]. But with advances in AI and the associated potential for significantly more sophisticated robots, humans may increasingly defer to robots. For example, an overarching ethical concern that we have sought to explore in our research is the prospect that children, their parents, and other caregivers might overtrust healthcare robots [2]. In this column, we highlight two other near-term examples where overtrust of robots may become problematic: in emergency situations and in the operation of self-driving cars. We close with some recommendations that may help mitigate overtrust concerns.
