
How to Stop Artificial Intelligence Being Biased

By New Scientist

July 10, 2018



Niki Kilbertus and colleagues at the Max Planck Institute for Intelligent Systems in Germany have developed a new method to avoid embedding bias into machine-learning algorithms.

Their technique incorporates sensitive data into the training process while bringing in an independent regulator and applying cryptographic techniques so that no party handles the raw sensitive attributes.

When training the artificial intelligence (AI), an organization can use as much non-sensitive data as it needs, but both the organization and the regulator receive the sensitive data only in encrypted form. That is still sufficient for the regulator to check whether the AI is making biased decisions, including decisions shaped by sensitive attributes the system may have inferred from the non-sensitive data.

Once assured that the decisions are fair, the regulator can issue the organization a fairness certificate. Because the check is performed on encrypted data, the regulator never needs to see the AI's inner workings, so trade secrets remain confidential.
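The abstract gives no implementation details, but the flavour of such a check can be sketched with a toy protocol. The example below is an assumption-laden illustration, not the authors' method: the fairness metric (a demographic-parity gap), the additive secret-sharing scheme, and every function name are invented for this sketch, and it simplifies matters by letting the regulator see the decisions themselves while keeping each individual's sensitive attribute hidden from both parties.

```python
"""Toy sketch (not the published method): a regulator computes an
aggregate fairness statistic from additively secret-shared sensitive
attributes, so neither party sees any individual's attribute in the clear."""
import random

P = 2**61 - 1  # large prime modulus for additive secret sharing


def share(value):
    """Split a value into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (value - r) % P


def demographic_parity_gap(decisions, org_shares, reg_shares):
    """Each party aggregates only its own shares; the parties then combine
    the aggregates, so no individual sensitive attribute is reconstructed."""
    n = len(decisions)

    # Organization's local sums (uses only the shares it holds).
    org_sum_a = sum(org_shares) % P
    org_sum_ya = sum(y * s for y, s in zip(decisions, org_shares)) % P

    # Regulator's local sums (uses only its shares; in this toy setting
    # the regulator is allowed to see the decisions themselves).
    reg_sum_a = sum(reg_shares) % P
    reg_sum_ya = sum(y * s for y, s in zip(decisions, reg_shares)) % P

    # Only aggregate counts are reconstructed, never individual attributes.
    total_a = (org_sum_a + reg_sum_a) % P          # users with attribute A = 1
    favourable_a1 = (org_sum_ya + reg_sum_ya) % P  # favourable decisions, A = 1
    favourable_a0 = sum(decisions) - favourable_a1 # favourable decisions, A = 0

    rate_a1 = favourable_a1 / total_a
    rate_a0 = favourable_a0 / (n - total_a)
    return abs(rate_a1 - rate_a0)


if __name__ == "__main__":
    # Synthetic data: each user secret-shares a binary sensitive attribute
    # between the organization and the regulator.
    random.seed(0)
    sensitive = [random.randint(0, 1) for _ in range(1000)]
    decisions = [random.randint(0, 1) for _ in range(1000)]
    org_shares, reg_shares = zip(*(share(a) for a in sensitive))

    gap = demographic_parity_gap(decisions, list(org_shares), list(reg_shares))
    print(f"Demographic parity gap: {gap:.3f}")  # certify if below a threshold
```

In this hypothetical setup the regulator could issue a certificate whenever the gap falls below an agreed threshold; the real system described in the article achieves a comparable separation of duties with far stronger cryptographic guarantees.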

From New Scientist

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA
