
AI Programs Exhibit Racial and Gender Biases, Research Reveals

By The Guardian

April 14, 2017


Researchers at the University of Bath and Princeton University have demonstrated that an artificial intelligence (AI) tool enabling computers to interpret everyday language exhibits racial and gender prejudices. Their study shows AI systems are absorbing the deeply ingrained biases hidden within patterns of language use.

The University of Bath's Joanna Bryson and Princeton's Arvind Narayanan concentrated on "word embedding," a machine-learning tool that helps computers make sense of natural language. They determined such tools more closely associate the words "female" and "woman" with the arts, humanities, and the home, while "male" and "man" were more strongly affiliated with math and engineering professions. The AI system also reflects racism, as it more commonly associates African American names with unpleasant words.
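The kind of association the researchers measured can be illustrated with a toy sketch. Word embeddings represent each word as a vector, and two words are "associated" when their vectors point in similar directions (high cosine similarity). The hand-made vectors below are hypothetical, not data from the study; they merely show the mechanics of such a bias probe.

```python
import math

# Toy, hand-made "embeddings" (NOT the study's data): each word is a
# vector, and related words point in similar directions. Real systems
# learn vectors like these automatically from large text corpora.
embeddings = {
    "woman":       [0.9, 0.1, 0.3],
    "man":         [0.1, 0.9, 0.3],
    "arts":        [0.8, 0.2, 0.4],
    "engineering": [0.2, 0.8, 0.4],
}

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# A bias probe compares how strongly each gendered word associates
# with each domain word.
for word in ("woman", "man"):
    for domain in ("arts", "engineering"):
        sim = cosine(embeddings[word], embeddings[domain])
        print(f"{word} ~ {domain}: {sim:.2f}")
```

In this contrived setup, "woman" scores higher with "arts" and "man" scores higher with "engineering," mirroring (in miniature) the skewed associations the researchers found in embeddings trained on real text.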

Bryson warns AI could reinforce existing prejudices because algorithms lack humans' ability to consciously counteract learned biases.



Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA

