News
Artificial Intelligence and Machine Learning

Overcoming AI Bias with AI Fairness

Seeking ways to eliminate bias from algorithms.
A number of organizations are fielding solutions that could be used to make artificial intelligence algorithms more fair.

Bias in artificial intelligence (AI) has been addressed by ACM in its Statement on Algorithmic Transparency and Accountability, which says that "Computational models can be distorted as a result of biases contained in their input data and/or their algorithms," and that there is "growing evidence that some algorithms and analytics can be opaque, making it impossible to determine when their outputs may be biased"; as a result, "automated decision-making about individuals can result in harmful discrimination."

Often the cause of bias in AI can be traced to the cognitive biases of humans, as represented in the datasets on which AIs are trained, since human prejudice can be incorporated into the labels humans attach to each datapoint. Beyond the commonplace biases encoded in a dataset's labels, such as those involving gender, race, and ideology, cognitive scientists have identified more than 180 cognitive biases that can find their way into AI systems, ranging from the bias blind spot (the tendency to see oneself as less biased than others) to out-group homogeneity (the tendency to see members of one's own group as more varied than members of other groups). Bias can also be introduced into results directly by the AI algorithms themselves.

As a result, there is a need for methodologies that provide greater trust, transparency, and control of AI's fairness—fairness being the positive goal attained by eliminating the negative effects of bias and other inequities.

"Fairness" also expresses the fact that there is more to be addressed by programmers than just cleansing their datasets and algorithms of bias, according to Kush Yarshney, research staff member and manager at IBM's Thomas J. Watson Research Yorktown Heights, NY, research facility. "Fair and trustworthy AIs do more than eliminate bias; they also explain their results, interpret them for users, and provide transparency in how results were arrived at," said Varshney.

Examples of AI bias abound, many of which are relatively benign, according to IBM Fellow Saška Mojsilović. "For instance, if you search for a picture of a researcher in many popular image libraries, you will get a lot of pictures of white males wearing white coats." In that scenario, unintentional bias comes from the labels that humans attach to each image, which are in turn learned by the search engine's AI and propagated to user queries.

In addition to such unintentional bias, there are many cases of intentional AI bias. For instance, Google reportedly intentionally biased its search engine results with its now defunct program codenamed Dragonfly, which was written to meet the Chinese government's demand for censorship.

Many more serious cases of unfairness come from AI algorithms that directly harm users, spanning AIs that choose among medical treatments, job candidates, loan prospects, and criminal punishments. Microsoft's now-defunct Tay chatbot, for instance, learned racial bias from its users in less than 24 hours. Likewise, fake news stories that are picked up and repeated by many sources as if they were independent can fool news-aggregating AIs into considering them legitimate news. Also, criminal risk-assessment AIs such as COMPAS, used to inform sentencing and parole decisions, have been found to rate Black defendants as higher-risk than comparable white defendants, and short-term loan AIs like Upstart have been questioned over bizarre signals such as text-message punctuation, even as they claim to be fair.

What is worse, there is no one-size-fits-all solution even to the clearest of biases. One might think that eliminating "skin color" from database categories would resolve AI racism, but sometimes skin color is essential for accurate results. For instance, to accurately diagnose and predict skin cancer, a major factor that must be considered is skin color, because the darker a person's skin, the lower their risk of skin cancer, and vice versa.

On the other hand, skin color is a demonstrably inaccurate predictor of criminal recidivism.

The learning algorithms themselves can harbor sources of bias that are even more difficult to understand. For instance, undersampling, in which a group is represented by only a few examples, can produce bias because too few classification categories are created to represent that group accurately, while oversampling vast datasets can produce so many categories that people are unfairly pigeonholed. The effect of undersampling can be made concrete with the small sketch below.
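
The following is a minimal sketch, not drawn from the article, in which the data and model are purely illustrative. It shows how undersampling one group in the training data skews a model's accuracy for that group:

    # Illustrative only: undersampling one group degrades accuracy for that group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two features; the label depends on the features plus a group-specific
        # shift, so a model fit mostly to one group fits the other group poorly.
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
        return X, y

    # Group A dominates the training set; group B is undersampled.
    Xa, ya = make_group(5000, shift=0.0)
    Xb, yb = make_group(100, shift=1.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on fresh, equally sized samples from each group.
    for name, shift in [("group A", 0.0), ("group B", 1.5)]:
        X_test, y_test = make_group(2000, shift)
        print(name, "accuracy:", round(model.score(X_test, y_test), 3))
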

Machine learning algorithms based on multi-layer neural networks—utilizing a process called deep learning—are particularly difficult to make fair and transparent.

Many approaches are currently being pursued to make deep learning algorithms capable of explaining their decisions. One is to build explainability into the algorithm by requiring it to keep an audit trail of the learning categories it uses. Unfortunately, the intermediate layers of a deep network sometimes create categories that make no sense to humans.

Putting a supervisory "human in the loop" can sometimes help guard against mistakes arising from such unexplainable intermediate layers. Today's autonomous cars, for example, require a human in the loop who stays alert for the AI's mistakes and is ready to take over the driving task immediately. Likewise, AIs that suggest sentences for convicted criminals can be supervised by human judges, who ensure a sentence is just and reasonable before it is pronounced. And even though AIs make recommendations on parole candidates, human parole boards can still interview each prisoner to verify whether they should be released.

For its part, Google AI has created a tutorial on its general recommendations for AI, which suggests avoiding the use of gender, race, and other attributes widely recognized as sources of AI bias. Microsoft has created a machine learning algorithm for internal use which, it claims, avoids AI bias through intelligible, interpretable, and transparent machine learning. Facebook has likewise created a tool called Fairness Flow, which it claims mitigates AI bias by measuring how an algorithm interacts with specific groups of people.

Nevertheless, there are virtually no proprietary fairness program suites available for programmers seeking to create their own fair AIs (although Accenture is reported to be working on an AI Fairness Tool that fights racial and gender bias).

The open source community on GitHub, however, offers nine program suites that help achieve AI fairness: Aequitas, AI Fairness 360, Audit-AI, FairML, Fairness Comparison, Fairness Measures, FairTest, Themis, and Themis-ML. For the most part, these program suites test for bias, making it much easier for developers to know when they have created a fair AI. Three of them, however (AI Fairness 360, Fairness Comparison, and Themis-ML), take the extra step of offering ready-to-use algorithms that developers can embed in their code to better ensure fairness. All of these open-source tools aid in achieving fair results, and in some cases they can also explain how those results were reached, in terms understandable by humans, providing transparency into their inferences.

The most comprehensive of these suites is IBM's AI Fairness 360, which offers 10 state-of-the-art bias-mitigation algorithms, collected from the fairness research community, for programmers to embed in their code. It also offers more than 70 fairness metrics for gauging the seriousness of bias in existing or new code. Its embeddable algorithms test for bias, report where bias has slipped in, and explain how an algorithm or its input data went wrong. The toolkit can also cleanse input training datasets of bias before machine learning takes place, and it reports to developers how the biases were found, so future datasets and algorithms can be chosen for their fairness from the start.
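
AI Fairness 360 is distributed as the open-source aif360 Python package. The sketch below illustrates the typical workflow of measuring bias with one of its metrics and applying one of its pre-processing mitigation algorithms; it assumes the package is installed and that the UCI Adult census data files the package expects are in place, and it is an illustration rather than a recipe taken from this article:

    # Minimal sketch of the aif360 workflow: measure bias, mitigate, re-measure.
    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    privileged = [{'sex': 1}]     # group encodings follow the AdultDataset defaults
    unprivileged = [{'sex': 0}]

    data = AdultDataset()

    # Measure bias in the raw data: a disparate impact of 1.0 means parity.
    metric = BinaryLabelDatasetMetric(data,
                                      unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print("Disparate impact before:", metric.disparate_impact())

    # One of the toolkit's pre-processing mitigators: reweigh training examples
    # so outcomes become statistically independent of the protected attribute.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    data_transf = rw.fit_transform(data)

    metric_transf = BinaryLabelDatasetMetric(data_transf,
                                             unprivileged_groups=unprivileged,
                                             privileged_groups=privileged)
    print("Disparate impact after:", metric_transf.disparate_impact())
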

A recently announced IBM product—IBM AI OpenScale—dovetails with a developer's finished AI code, monitoring its fairness in real time from a dashboard that measures the AI's success rate and uncovers hidden biases that may creep in throughout the lifetime of the application.

Fairness Comparison, on the other hand, does not aim to make a specific machine learning algorithm fair, but rather concentrates on comparing machine learning algorithms for their fairness. Based on A Comparative Study of Fairness-Enhancing Interventions in Machine Learning, a study by researchers at Haverford College and the universities of Arizona and Utah, the suite lets developers benchmark the fairness of machine learning algorithms. It also provides fairness-enhancing algorithms that developers can add to their code and then benchmark, in order to choose which of their algorithms is fairest, and least biased, for a given application.
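
The idea behind such benchmarking can be sketched generically. The following is not Fairness Comparison's actual interface, just an illustration of comparing two classifiers by accuracy and by a simple group-fairness measure on the same data:

    # Illustrative only: compare two classifiers on accuracy and group fairness.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 4000
    group = rng.integers(0, 2, n)                     # 0 = unprivileged, 1 = privileged
    X = np.column_stack([rng.normal(size=n), group])  # the group leaks into the features
    y = ((X[:, 0] + 0.8 * group) > 0.5).astype(int)   # historical outcomes favor group 1

    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

    def parity_difference(y_pred, g):
        # P(positive | unprivileged) - P(positive | privileged); 0 means parity.
        return y_pred[g == 0].mean() - y_pred[g == 1].mean()

    for model in (LogisticRegression(), DecisionTreeClassifier(max_depth=4)):
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(type(model).__name__,
              "accuracy:", round((pred == y_te).mean(), 3),
              "parity diff:", round(parity_difference(pred, g_te), 3))
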

Themis-ML provides a library of fairness-aware machine learning algorithms, described in A Fairness-Aware Machine Learning Interface for End-to-End Discrimination Discovery and Mitigation. The AIs most appropriate for its fairness algorithms deal with socially sensitive decision-making processes such as hiring, loan approvals, and the granting of parole. Its example algorithms operate in a classification context with a single binary protected-class variable and a single binary target variable; the authors say they are working to expand Themis-ML to cover multiple class and target variables.
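
One of the pre-processing techniques described in the Themis-ML paper is relabelling (also called "massaging"), in which a preliminary ranker flips the labels of borderline training examples until positive rates match across groups before the final model is trained. The sketch below is a minimal, self-contained version of that idea written with scikit-learn rather than Themis-ML's own classes; the data and the number of labels swapped are simplified for illustration:

    # Illustrative relabelling ("massaging") sketch, not Themis-ML's API.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 2000
    s = rng.integers(0, 2, n)                # binary protected attribute (1 = privileged)
    X = rng.normal(size=(n, 3))
    y = ((X[:, 0] + 0.7 * s + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

    # 1. Rank every example with a preliminary classifier's score.
    scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

    # 2. Roughly how many labels to swap so positive rates match
    #    (an approximation that assumes the two groups are about the same size).
    gap = y[s == 1].mean() - y[s == 0].mean()
    n_swap = int(abs(gap) * min((s == 1).sum(), (s == 0).sum()) / 2)

    y_fair = y.copy()
    # Promote the highest-scoring negatives in the unprivileged group ...
    up = np.where((s == 0) & (y == 0))[0]
    y_fair[up[np.argsort(-scores[up])[:n_swap]]] = 1
    # ... and demote the lowest-scoring positives in the privileged group.
    down = np.where((s == 1) & (y == 1))[0]
    y_fair[down[np.argsort(scores[down])[:n_swap]]] = 0

    # 3. Train the final model on the relabeled data.
    fair_model = LogisticRegression().fit(X, y_fair)
    print("positive-rate gap before:", round(gap, 3))
    print("positive-rate gap after: ", round(y_fair[s == 1].mean() - y_fair[s == 0].mean(), 3))
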

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.
