Human-Like Programs Abuse Our Empathy

By The Guardian

June 16, 2022

Google engineer Blake Lemoine's misconception that the LaMDA chatbot is sentient shows the risks of designing systems in ways that convince humans they are seeing real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate?

That is why we must demand transparency, especially in the case of technology that uses human-like interfaces such as language. For any automated system, we need to know what it was trained to do, what training data was used, who chose that data, and for what purpose.

From The Guardian

