Why is transparency in AI important, which major issues does it affect, and what risks can transparency itself introduce in AI systems?

I. Transparency in AI

The principle of transparency

Imagine a facial recognition system called MYFACE. MYFACE is used for security purposes at an airport. Usually it works perfectly, but one day it starts to miscategorize individuals as potentially dangerous. As a result, several innocent people are arrested. Would it be important to know why the system made these mistakes? Should we be able to explain them? And why would this matter?


Some contemporary machine learning systems are so-called “black box” systems: we can observe their inputs and outputs, but we cannot readily see how they arrive at their decisions. This “opacity”, or lack of visibility, can be a problem if we use these systems to make decisions that affect individuals.
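To make the idea of opacity concrete, here is a minimal sketch in Python (using scikit-learn; the data and the feature names are entirely hypothetical). The model produces a decision, but its internals offer no human-readable justification; the permutation-importance step at the end is only a coarse, global transparency aid, not an explanation of any individual decision.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical "loan application" data: 500 applicants, three made-up features.
X = rng.normal(size=(500, 3))                     # income, debt, age (scaled)
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = np.array([[0.2, 1.5, -0.3]])
print("decision:", model.predict(applicant)[0])   # 0 = rejected, 1 = approved
# The decision emerges from the votes of 100 trees; no single rule explains it.

# Permutation importance estimates how much each feature matters on average
# across the whole dataset -- it does not say why *this* applicant was rejected.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")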

Individuals have a right to know how critical decisions – such as who is approved for a loan, who is paroled, and who is hired – are made. This has led many to call for “more transparent AI”.

Transparency in AI

Transparency is a property of a system that makes it possible to obtain information about its inner workings. But which information is needed, and whether it is ethically relevant, depends largely on the ethical issue we are trying to address. Transparency itself is ethically neutral: it is not an ethical concept but an ideal, something that can manifest in many different ways and can serve as a solution to underlying ethical questions. In this sense, transparency is relevant to at least the three following issues:

1) The justification of decisions. Good governance in the public and private sectors requires that decisions are non-arbitrary. This applies to any kind of decision-making that has an ethically or legally relevant effect on individuals. Non-arbitrariness means access to justifications: “why was this decision reached, and on what grounds?” Furthermore, especially in public governance, the capacity to contest and appeal decisions is crucial, as it represents the demand to right wrongs.

2) A right to know. According to human rights frameworks, people are entitled to explanations of how decisions about them were made, so that they can maintain genuine agency, freedom and privacy (for more on human rights, see chapter 5). Freedom entails the right to answers to questions such as “How am I being tracked? What kinds of inferences are being made about me? And how, exactly, have those inferences been made?”

3) A moral obligation to understand the consequences of our actions. As a community, we also have a responsibility to manage risks. There is a moral obligation, up to some reasonable level, to understand and predict the consequences of the technologies one brings into the world. Saying “we can’t understand now what it will do” is not a valid argument for unleashing a system that causes harm. Instead, it is our moral duty to explore the possible risks.

These three points can all be summarized as calls for sufficient information. Do we know whether, and to what extent, an algorithmic decision is justified? Do I know how inferences about me are made? To what extent am I responsible for the actions of a system, and how much do I need to know about its inner workings to be able to take that responsibility?
