
Making AI Systems Fairer Will Require Time, Guidelines


In January, the Institute for Ethics in Artificial Intelligence was established at Germany's Technical University of Munich (TUM), with initial funding from a five-year, $7.5-million grant from Facebook. The Institute has issued its first call for proposals, and an advisory board was recently appointed.

The Institute's director, Christoph Lütge, holds the Peter Löscher Chair in Business Ethics at TUM. Lütge recently spoke about ethics in artificial intelligence (AI) generally, and the new Institute specifically.

Can you give an example of the type of ethical question in AI that the Institute might be dealing with?

AI systems are widely used in U.S. courts to help determine probation. These systems assess the probability that someone will commit another crime. In a famous case from a couple of years ago, ProPublica showed that such systems have a clear bias against African-Americans. You might also have bias without those algorithms; a judge is not necessarily fairer.

Studies show that after breakfast, judges tend to be more favorable toward giving probation, and less favorable just before lunch.  There is a chance that these AI systems might be fairer eventually, but they will need guidelines.
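Concretely, the bias ProPublica documented was a gap in error rates: defendants who did not go on to reoffend were flagged as high-risk at different rates depending on their group. A minimal Python sketch of that comparison might look like the following; the groups, records, and helper function are all invented for illustration, not drawn from any real dataset.

```python
# A minimal sketch of the kind of check ProPublica ran: comparing the false
# positive rate of a risk score across two demographic groups. All names and
# records here are invented toy data, not the real COMPAS data.

def false_positive_rate(predictions, outcomes):
    """Share of people who did not reoffend but were still flagged high-risk."""
    flagged = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    did_not_reoffend = sum(1 for o in outcomes if not o)
    return flagged / did_not_reoffend if did_not_reoffend else 0.0

# Each record: (group, predicted_high_risk, reoffended) -- hypothetical data.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

for group in ("A", "B"):
    preds = [flagged for g, flagged, _ in records if g == group]
    outs = [reoffended for g, _, reoffended in records if g == group]
    print(f"group {group}: false positive rate = {false_positive_rate(preds, outs):.2f}")
```

On this toy data, group A's false positive rate is 0.67 against 0.00 for group B; a disparity of that shape, at scale, is what the ProPublica analysis reported.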
 

Facebook is using AI to try to identify content that it does not want, such as terrorist content and extremely violent content.  What kinds of ethical questions come up here?

First, I should say this is an independent research institute. We are not working for Facebook. The money comes as a research gift to us, with no obligations whatsoever, not even reporting obligations.

But certainly there are questions in the systems Facebook and others use that we will be looking into. What you are talking about is automatic content filtering. This is a very interesting issue. For example, what is hate speech? What constitutes hate speech in which country? In Muslim countries, in Hindu countries, in the U.S., and in Europe, there are different ideas of what hate speech is. There is not even a unified European view. Can people appeal when their content is banned as hate speech? Who makes that decision? Some would like a political body to make it. That is a discussion we need to have.
 

You note that the Institute is independent of Facebook, but Facebook's news release announcing the Institute's funding said that Facebook will "offer an industry perspective on academic research proposals, rendering the latter more actionable and impactful." That sounds like Facebook expects to comment on the proposals.

Not directly on the proposals and the selection of proposals. We have a procedure for collecting and then assessing proposals, and an independent advisory board of experts from academia, industry, and civil society, with no one from Facebook on it.

But eventually, when we have research projects, we will be talking to many actors, including from business. We want to conduct a dialogue with Facebook, but also with other companies, to get their perspectives. If we want to improve AI systems, we need to talk to the people who are implementing them.


Late last year, the Montreal Declaration for Responsible Development of AI appeared, and in April, the European Commission put out Ethics Guidelines for Trustworthy AI. Will the Institute be putting out guidelines of this sort?

I was actually involved in the predecessor to the European Commission guidelines, the AI4People group, which had the same five ethical principles that the European high-level expert group then adopted.

We have now seen a number of these lists of AI principles. It's time to move beyond them and ask: what does it mean to implement a principle like "Do no harm"? So what the Institute is looking to do is more concrete; maybe not always immediately actionable, but at least more fine-grained guidelines dealing with certain systems, with certain software, and also with certain robots.

One example is explicability. Just making an algorithm or a system transparent doesn't tell you much. With traditional code, you can read what it does; with AI systems, the code itself can be quite simple, because the behavior comes from learned parameters, so even if you make the code transparent, you still don't know what the system does. Instead of transparency you might have explicability, or explainability, meaning that you have to be able to explain, at least in principle or in most cases, how the algorithm came to its conclusion. What would that look like? Maybe you would be able to click on a box titled "Explain to me," and you would see which input had the most influence on the conclusion. This might not help very much at first. Just as automatic translation took a while to get where it is now, this will also get better. It is important to start a discussion about the explicability criterion.
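What an "Explain to me" box might show is easiest to see for the simplest possible model. The sketch below assumes a plain linear score, where each input's influence is just its weight times its value; the feature names and weights are invented for illustration. For a learned deep model, answering the same question is far harder, which is exactly Lütge's point.

```python
# A toy illustration of the "Explain to me" box Lütge describes, under the
# simplest possible assumption: a linear scoring model, where each input's
# influence on the conclusion is just weight * value. The feature names and
# weights below are hypothetical, not taken from any real system.

weights = {"prior_offenses": 0.9, "age": -0.4, "employment_years": -0.3}

def explain(inputs):
    """Return the total score and each input's contribution, largest first."""
    contributions = {name: weights[name] * value for name, value in inputs.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain({"prior_offenses": 3, "age": 2.5, "employment_years": 1})
print(f"score = {score:.2f}")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

Here the output would show prior_offenses dominating the conclusion; for a linear model, that ranking is the whole explanation, while for modern learned systems producing anything comparably faithful is an open research problem.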


When one does not come from a technical background but is trying to understand how these AI systems are interacting with the world, is that lack of technical background a hindrance to working on the ethical issues?  How do you deal with that in an institute like this one?

I should say that my first degree is in business informatics, and then I turned to philosophy and ethics, so I have both those backgrounds.

In the Institute, we will have interdisciplinary cooperation. I did not want an institute working only on a social science basis, so we are creating tandem projects that always have one researcher from the technical side, say engineering or computer science, and one from the other side: ethics, law, government, social science in general. Only one of the two has to be from TUM. TUM has lots of people in the technical disciplines who work on these issues.


Will the research teams physically be in Munich?

Yes, but if there is cooperation with someone from an external institution, that person would not have to be in Munich all the time. We will have flexible modes.


How many people overall do you expect to be involved?

With the funding that we now have, I expect between 10 and 15 people as a first step.  We are also looking for other funding partners.  This is not meant to be an exclusive Facebook institute. Facebook is encouraging us to find other partners.


There is a question of whether Facebook is a media company or a platform. If it is a platform, it just puts things out and is neutral and does not edit. If it is a media company, it has very different responsibilities, and there are laws that govern that.  So, what is Facebook?

I think it is a difficult question. Facebook and others are now looking more closely at their content, and I think this is appropriate. We need to look at rules for content in order to meet the public's expectations. That is the responsibility of platforms. But I would not say Facebook is a media company in the traditional sense. Traditional media companies produce their own content. You could not upload content to CNN, could you?

These are difficult questions that have not appeared in the past. It's not possible to say, "Here is the list of 10 very high, abstract principles," and then we have solved all the problems.  It won't work that way.  We need to look at these issues in more detail, do something, and then refine it.

Allyn Jackson is a journalist specializing in science and mathematics topics, based in Munich, Germany.
