The General Data Protection Regulation (GDPR), scheduled to take effect across the European Union in 2018, could require companies to explain their decision-making algorithms in order to avoid unlawful discrimination.
This "right to explanation" poses challenges to businesses and opportunities to machine learning scientists, according to a recent report from Oxford Internet Institute researcher Bryce Goodman and University of Oxford Department of Statistics post-doctoral researcher Seth Flaxman. They contend compliance with such rules could be complicated, noting for example that excluding sensitive data about race or religion does not necessarily ensure algorithms will return non-biased results. They say this is because other non-sensitive data points, such as geographic area of residence, may correlate with sensitive information.
Moreover, Goodman and Flaxman note many large datasets are derived from multiple smaller datasets, making it extremely difficult for organizations to certify the integrity, accuracy, and neutrality of their data.
"The GDPR thus presents us with a dilemma with two horns: under one interpretation the non-discrimination requirement is ineffective, under the other it is infeasible," the researchers caution.
Attorney Lokke Moerel acknowledges that the existence of dynamic, self-learning algorithms makes it difficult to determine how they reach decisions at any point in time, let alone to convey this meaningfully to an individual. She suggests companies give individuals affected by such decisions more control over how their data is used, so those individuals are less inclined to challenge the logic behind the decisions.
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA