Ben Green is a postdoctoral scholar at the University of Michigan and an assistant professor at its Gerald R. Ford School of Public Policy.
Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own.
The problem, Green says in an interview, is that these laws almost never stop to ask whether human beings are actually up to the job.