
The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making

By Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

Communications of the ACM, Vol. 64 No. 4, Pages 136-143
10.1145/3433949



Automated decision-making systems (often machine learning-based) now commonly determine criminal sentences, hiring decisions, and loan approvals. This widespread deployment is concerning because these systems have the potential to discriminate against people based on their demographic characteristics. Current sentencing risk assessments are racially biased,4 and job advertisements discriminate based on gender.8 These concerns have driven explosive growth in fairness-aware machine learning, a field that aims to enable algorithmic systems that are fair by design.


Key Insights

[Key insights figure]

To design fair systems, we must agree precisely on what it means to be fair. One such definition is individual fairness:10 individuals who are similar (with respect to some task) should be treated similarly (with respect to that task). Simultaneously, a different definition states that demographic groups should, on the whole, receive similar decisions. This group fairness definition is inspired by civil rights law in the U.S.5,11 and U.K.21 Other definitions state that fair systems should err evenly across demographic groups.7,13,24 Many of these definitions have been incorporated into machine learning pipelines.1,6,11,16,25
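The group-based criteria above reduce to simple statistics over a system's decisions. As a rough sketch (not from the article; the function names, 0/1 group encoding, and toy data are illustrative assumptions), demographic-style group fairness compares positive-decision rates across groups, while the error-based definitions compare false-positive and false-negative rates:

# Illustrative sketch: group-fairness and error-rate-parity statistics for
# binary decisions. Names and data are assumptions, not from the article.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap in positive-decision rates between group 0 and group 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def error_rate_gaps(y_true, y_pred, group):
    """Gaps in false-positive and false-negative rates between the groups."""
    gaps = {}
    for name, label in (("fpr_gap", 0), ("fnr_gap", 1)):
        # Among individuals whose true label is `label`, compare the error
        # rate (prediction != label) in group 0 versus group 1.
        rates = []
        for g in (0, 1):
            mask = (group == g) & (y_true == label)
            rates.append((y_pred[mask] != label).mean())
        gaps[name] = abs(rates[0] - rates[1])
    return gaps

# Toy decisions for eight individuals split across two demographic groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))   # 0.5: unequal positive-decision rates
print(error_rate_gaps(y_true, y_pred, group))  # unequal error rates across groups

Individual fairness, by contrast, constrains how differently any two similar individuals may be treated, so it requires a task-specific similarity measure rather than group-level statistics like these.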
