
Keeping Algorithms Fair

A "fairness button" on a computer keyboard.

Over the last few years, we have been confronted with many cases of algorithms making unfair decisions. In 2016, ProPublica determined that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by judges, probation officers, and parole officers to assess the risk of recidivism, was biased against black defendants. In 2018, an Amazon hiring algorithm was shown to discriminate against women.

Despite such recent bad publicity about algorithmic decision-making, Sendhil Mullainathan, Roman Family University Professor of Computation and Behavioral Science at the University of Chicago's Booth School of Business, thinks that we can build fair algorithms. Mullainathan uses machine learning to understand complex problems in human behavior, social policy, and especially medicine. He started thinking about fair algorithms when, in a 2003 study, he found a large discrepancy in callback rates between resumes bearing white-sounding and black-sounding names.

Together with colleagues, Mullainathan has published a number of research papers on building fair algorithms. With some adjustments, the COMPAS algorithm and the Amazon hiring algorithm can perform more fairly than human decision-makers, he says.

"An algorithm is not just a tool, it is also something like a Geiger counter," says Mullainathan. "If you walk around with a Geiger counter and the counter goes off in some place, than you shouldn't blame the Geiger counter; the counter is telling you that there is radiation. In a similar way, the Amazon hiring algorithm was actually telling us that the hiring staff at Amazon was biased against women. After all, the algorithm was trained on the data of human decision-makers. Every time an algorithm turns out to give biased outcomes, we of course should be worried about the algorithm, but we should even be more worried about the current state of affairs in human decision-making."

Mullainathan says the research he has done with his colleagues suggests a number of ways to build fairer algorithms. "First, we should focus more on the meaning of the labels that we use for the data. In most social science problems, it is not clear at all what the labels mean. If you want to select the most effective worker from a list of candidates, what do you really mean by 'most effective'? There is often a big gap between the conceptual way an algorithm is described and its literal implementation."

Working with his colleagues, Mullainathan investigated a commercial algorithm that tried to predict which patients had the most complex healthcare needs. The algorithm used the medical costs spent on a patient as a proxy for that patient's medical need. As a result, the algorithm assigned twice as many white patients as black patients to a health intervention, an odd result, because in reality more black patients than white patients had complex healthcare needs. The problem was that black people in the U.S. do not visit the doctor as often as white people, so their costs understate their medical need and the training data were biased.

Mullainathan describes this as "an illustration of the folly of predicting A, medical costs in this case, while hoping for B, in this case the health of a patient. We solved this by using as a label not the costs, but some physiological outcomes. When we did that, the algorithm became remarkably unbiased and assigned twice as many blacks as whites. So you have to make an effort to use the correct labels."
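The swap Mullainathan describes, from a cost proxy to a physiological target, can be sketched in a few lines of Python. The sketch below is illustrative only: the data file, the column names (annual_cost, num_chronic_conditions), and the model choice are assumptions for this example, not the actual system he studied.

    # Minimal, hypothetical sketch of changing the label from cost to a
    # physiological outcome. Dataset, columns, and model are assumptions.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    patients = pd.read_csv("patients.csv")          # hypothetical dataset
    FEATURES = ["age", "blood_pressure", "hba1c"]   # hypothetical features

    # Proxy label: annual spending. Because black patients in the U.S. see the
    # doctor less often, their costs understate their need, and a model trained
    # on this target inherits that bias.
    cost_model = GradientBoostingRegressor().fit(
        patients[FEATURES], patients["annual_cost"])

    # Better label: a physiological outcome, closer to what we actually care about.
    need_model = GradientBoostingRegressor().fit(
        patients[FEATURES], patients["num_chronic_conditions"])

    # Enroll the patients with the highest predicted need, not the highest predicted cost.
    patients["predicted_need"] = need_model.predict(patients[FEATURES])
    enrolled = patients.nlargest(100, "predicted_need")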

A second way to build fairer algorithms sounds counterintuitive: for fairness reasons, one should include potentially discriminating variables such as gender, race, or age rather than exclude them, as we would demand of human decision-makers. "A body of research demonstrates that an algorithm can only undo the human bias if we explicitly tell the algorithm which concepts, like gender or age, humans discriminate on in their decisions. Algorithms are brilliant because they take everything very literally. They take advantage of labels like gender or age to discover that human data are biased, and then they undo the bias."
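A toy sketch can make this concrete. It assumes a hypothetical candidates table with a human screening score (recruiter_score) known to be biased against women, a gender column, and a later ground-truth outcome (job_performance); all of these names, and the linear model, are illustrative assumptions, not anything from Mullainathan's papers. With the gender label included, the model can learn, and correct for, the offset in the human scores; without it, it cannot.

    # Hypothetical sketch: letting the model see the protected attribute so it
    # can undo bias in a human-generated feature. All column names are assumed.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    candidates = pd.read_csv("candidates.csv")      # hypothetical dataset
    candidates["is_female"] = (candidates["gender"] == "F").astype(int)

    # Blind model: it cannot tell that equally strong women arrive with
    # systematically lower human recruiter scores.
    blind = LinearRegression().fit(
        candidates[["recruiter_score"]], candidates["job_performance"])

    # Aware model: with gender included, it can learn the offset in the biased
    # scores and effectively undo it when predicting actual performance.
    aware = LinearRegression().fit(
        candidates[["recruiter_score", "is_female"]], candidates["job_performance"])

    # A positive coefficient on is_female would mean women's recruiter scores
    # understate their performance; the aware model corrects for that.
    print(dict(zip(["recruiter_score", "is_female"], aware.coef_)))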

A third way to build fairer algorithms, Mullainathan says, is to introduce "fairness knobs" in the algorithms. "Let's take the Amazon hiring algorithm again. If we consider it fair that men and women are equally represented in the selected candidates, we can instruct the algorithm to separate its decisions about males and females; then it selects, for example, the top 5% of the male candidates and the top 5% of the female candidates. In this way, you get rid of the human gender bias while still using the power of the algorithm in automatically ranking candidates. By building in such equality knobs, we can adjust various thresholds in accordance with whatever equalization we want. You can't do this with human decision-making."
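One way to read such a fairness knob is as a within-group ranking rule: score everyone with the same model, then take the same top fraction from each group. The sketch below assumes a hypothetical candidates table with a model score column (predicted_fit) and a gender column; those names and the 5% threshold are illustrative assumptions.

    # Hypothetical sketch of a fairness knob: rank within each group and take
    # the top fraction of each. Column names and threshold are assumptions.
    import pandas as pd

    candidates = pd.read_csv("candidates.csv")   # hypothetical dataset
    TOP_FRACTION = 0.05                          # the adjustable knob

    def select_top(group: pd.DataFrame, fraction: float) -> pd.DataFrame:
        # Keep the highest-scoring `fraction` of this group's candidates.
        k = max(1, int(len(group) * fraction))
        return group.nlargest(k, "predicted_fit")

    # Rank men and women separately, then take the top 5% of each group, so
    # each gender is selected at the same rate regardless of bias in the scores.
    shortlist = (candidates.groupby("gender", group_keys=False)
                 .apply(select_top, fraction=TOP_FRACTION))

Raising or lowering TOP_FRACTION per group is the kind of threshold adjustment Mullainathan refers to: the equalization policy is explicit and auditable, which is not possible with human decision-making.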

These methods of de-biasing algorithms might cause some discomfort, Mullainathan says. Many managers hoped to get objective decisions by outsourcing them to computer scientists, only to find that all kinds of thorny social-science issues creep into building fair algorithms. Mullainathan believes this realization is a good thing after all, as "the next big wave that algorithmic decisions will cause will be that they force us to make our human values and priorities explicit.

"We humans cannot hide our choices in our minds any longer. I think society as a whole will be much better off if we have a conversation about our values. If Amazon discards its hiring algorithm and turns back to only human decision-making, then it goes back in transparancy. It would be better to learn from its mistakes and build a fairer hiring algorithm."

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
