
What If We Could Just Ask AI to be Less Biased?

By MIT Technology Review

March 28, 2023



Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it's a white man with glasses. 

Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities. 

Although I've written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like "CEO" or "director."

And the bias problem runs even deeper than you might think, extending into the broader world created by AI. These models are built by American companies and trained on North American data, so when they're asked to generate even mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.
