
Battling AI Biases

Artificial intelligence can become as biased as its programmers.
Artificial intelligence systems aren't necessarily objective or fair.

It is no secret that artificial intelligence (AI) is widely used to support hiring decisions, college admissions, bank loans, court sentencing, and even how police allocate personnel and resources. What is not as widely recognized is that these systems aren't necessarily objective or fair.

"Any time you have AI software making decisions, it reflects historical prejudices and inequities, including racial and gender bias," states Arvind Narayanan, assistant professor of computer science at Princeton University.

Not surprisingly, as artificial intelligence moves into the mainstream of business and society, the problems and challenges of using it are growing. What is more, in the not-too-distant future, it could also affect how robots, drones, and automated surveillance systems make decisions.

Says Cynthia Dwork, Gordon McKay Professor of Computer Science at Harvard University and a distinguished scientist at Microsoft Research, "It's not possible to avoid AI biases, because machine learning reflects human thinking."

Risky Business

It is tempting to think of AI as a completely objective way to make decisions; removing humans seemingly eliminates emotions and prejudices. However, as researchers scrutinize algorithms, they are finding that biases abound. Widely used word-embedding techniques, which map words to numerical vectors that capture their relationships, have fueled enormous gains in natural language processing, but at the same time they have introduced a new problem: "Machine learning picks up the patterns, in this case the biases and prejudices, very efficiently," Narayanan says.

To be sure, word relationships reflect patterns of human thinking, and thus inherent biases. Consider: Narayanan, along with Joanna Bryson, a researcher at the University of Bath in the U.K., used a machine learning version of the Implicit Association Test, a widely used psychological technique that measures how quickly human respondents pair words and concepts, to identify AI biases. Their technique, dubbed the Word Embedding Association Test (WEAT), applies Stanford University's open source GloVe algorithm to word vectors trained on roughly 840 billion words of Web text, and uses word2vec (from Google) to examine embeddings derived from Google News. The researchers analyzed attribute words such as "man, male" and "female, woman" along with target words like "programmer, engineer, scientist" and "nurse, teacher, librarian."

In the end, words such as "female" and "woman" were found to be more closely associated with arts and humanities, while "man" and "male" were linked more closely with math, science, and engineering. The same research found that people, and possibly machines, view European-American names more favorably than African-American names. This might translate into employers favoring applicants with European-American names on job applications.
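At its core, such an association test compares cosine similarities between word vectors. The sketch below is a simplified, WEAT-style differential association score computed over made-up toy vectors (not the GloVe or word2vec embeddings used in the actual study), showing how a target word's relative closeness to two attribute sets can be quantified.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attrs_a, attrs_b):
    """WEAT-style differential association: mean similarity to attribute
    set A minus mean similarity to attribute set B."""
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Toy, hand-made vectors for illustration only; the actual study used
# pretrained embeddings such as GloVe (Common Crawl) and word2vec (Google News).
vectors = {
    "male":      np.array([0.9, 0.1, 0.0]),
    "female":    np.array([0.1, 0.9, 0.0]),
    "engineer":  np.array([0.8, 0.2, 0.1]),
    "librarian": np.array([0.2, 0.8, 0.1]),
}

male_attrs = [vectors["male"]]
female_attrs = [vectors["female"]]

for target in ("engineer", "librarian"):
    score = association(vectors[target], male_attrs, female_attrs)
    print(f"{target}: {score:+.3f}")  # positive = closer to the 'male' attribute set
```

With real embeddings, a positive score for "engineer" and a negative score for "librarian" would mirror the gender associations reported above; the full WEAT aggregates such scores across whole sets of target and attribute words into an effect size and significance test.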

"Anytime you train an algorithm based on human culture, you wind up with results that mimic it," says Bryson, who co-authored the academic paper.

Indeed, the news website Motherboard found that Google's Cloud Natural Language API produced biased results, including negative sentiment ratings for words such as "gay" and "homosexual." Over the last two years, Microsoft had to pull two different chatbots after they learned racist and bigoted thinking from human users and began to use offensive language.

Meanwhile, a 2016 ProPublica investigation found that AI software widely used in criminal sentencing produced starkly different, and potentially biased, predictions of recidivism risk for black and white defendants.

My Fair Algorithm

Not surprisingly, creating unbiased algorithms is an onerous task. Machine learning systems, like people, pick up stereotypes and cultural biases from all sorts of sources, including word vectors, geographic locations (such as street names or districts), and even images and sounds. Identifying AI biases is a mind-bending task; figuring out what to do about them extends the challenge further.

"There are competing but well-meaning definitions of 'fairness' in society or different groups within it," Dwork explains. "Unfortunately, these definitions and desires cannot all be satisfied simultaneously." Further complicating things, the definition of a bias often changes over time and it may be different among different groups or in different places. Introducing metrics or tweaking an algorithm to "correct" the bias may introduce new or different biases.

How can society cope with this problem? Transparency is a starting point, Dwork says. "There should be no hidden secrets or proprietary algorithms. Researchers should be able to play with algorithms and the underlying data sets." Another possible remedy is to add data and constantly retrain algorithms with more recent information. Finally, software developers should use gender- and race-neutral language, which is not the same as simply avoiding loaded words, she says.

Dwork is exploring ways to move beyond broad statistical models. She hopes to develop valid "fairness" algorithms based on how "similar or dissimilar" people are within a particular classification category. By identifying a task-specific metric and then analyzing individuals, it may be possible to strip out many biases that occur in algorithms used for automated loan decisions, college admissions, résumé processing, and even advertising.
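One way to read that idea is the "individual fairness" condition from Dwork and colleagues' earlier theoretical work: people who are similar under a task-specific metric should receive similar outcomes. The following sketch is a minimal, hypothetical illustration of checking that condition on a toy loan-approval rule; the applicants, distance metric, and decision function are invented for illustration and are not drawn from any real system.

```python
from itertools import combinations

# Hypothetical applicants: (income in $K, debt-to-income ratio).
applicants = {
    "A": (55.0, 0.30),
    "B": (54.0, 0.32),
    "C": (120.0, 0.10),
}

def task_distance(x, y):
    """Hypothetical task-specific metric: how dissimilar two applicants are."""
    return abs(x[0] - y[0]) / 100.0 + abs(x[1] - y[1])

def decision(x):
    """Toy loan decision with a hard income cutoff (1 = approve, 0 = deny)."""
    income, debt = x
    return 1.0 if income >= 55.0 and debt <= 0.35 else 0.0

def violates_individual_fairness(x, y):
    """Lipschitz-style check: the gap in outcomes should not exceed the
    distance between the individuals under the task-specific metric."""
    return abs(decision(x) - decision(y)) > task_distance(x, y)

for (name_x, x), (name_y, y) in combinations(applicants.items(), 2):
    flag = "VIOLATION" if violates_individual_fairness(x, y) else "ok"
    print(f"{name_x} vs {name_y}: {flag}")
```

The hard cutoff treats two nearly identical applicants (A and B) very differently, which is exactly what the similarity-based condition flags; in practice, the hard part of this approach is agreeing on the task-specific metric itself.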

In the end, the sobering reality is that as long as there are people, there will likely be some type of bias in AI. Says Narayanan: "It's a problem that isn't going to go away anytime soon, but AI bias also serves as an opportunity. It provides a window into societal biases, and if AI is used effectively, it could provide greater transparency into decision-making processes."

Samuel Greengard is an author and journalist based in West Linn, OR.
