
The Danger of Anthropomorphic Language in Robotic AI Systems

By Brookings TechStream

June 29, 2021


When describing the behavior of robotic systems, we tend to rely on anthropomorphisms: cameras "see," decision algorithms "think," and classification systems "recognize." But such terms create expectations and assumptions that often do not hold, especially among people with no training in the underlying technologies.

Designing, procuring, and evaluating artificial intelligence (AI) and robotic systems that are safe, effective, and predictable is a central challenge in contemporary AI. A systematic approach to choosing the language that describes these systems is the first step toward mitigating the risks of unexamined assumptions about AI and robotic capabilities.


