What does accountability actually mean, and how does it apply to AI ethics? We’ll also discuss what moral agency and responsibility mean and the difficulty of assigning blame.
IV. The problem of individuating responsibilities

Accountability is often understood as a legal and ethical obligation on an individual or organisation to accept responsibility for the use of AI systems and to disclose the results in a transparent manner. This formulation presupposes a "power relation": it individuates who is in control and who is to blame.

However, it has turned out to be notoriously difficult to set specific criteria for how, exactly, responsibilities should be individuated, directed and defined. In many countries, there are ongoing debates on these questions. International actors, such as the European Union and the G7, have addressed them as open challenges.

Why is it so difficult to set criteria for who is responsible?

  • Firstly, the quality of responsibility differs. An actor is responsible for a specific action or omission, but the quality of that responsibility depends on the stakeholder. Thus, although by choosing an action you take on responsibility for it, the quality of that responsibility also depends on your own properties. Intelligent technologies complicate this further.

    As we delegate more and more decision-making tasks and functions to algorithms, we also reshape decision-making structures. AI augments our intelligence by giving us more computational power, allowing better predictions and enhancing our sensory apparatus. Humans and machines become cognitive hybrids: they cooperate cognitively (thinking) and epistemically (knowledge), at both the individual and the collective level. This cooperation creates systemic properties that cannot be attributed to any single actor.

    It is often thought that it is sufficient for a human to stay "in the loop" or "on the loop", meaning that at some point in the decision-making process a human individual is able to monitor or intervene in the artificial system. However, as algorithms enter decision making in, say, public-sector governance, collective decision making can take a very complex and highly distributed form. Individuating and addressing the relevant factors in a way that guarantees a human stays in or on the loop may be very difficult.

  • Secondly, technology can also take a persuasive form: it influences and controls people. A classic example is the seat-belt reminder. In many cars, if the seat belts are not fastened, a constant beeping sound goes off. This can be taken as a form of controlling influence, in this case a kind of coercion: the driver can only stop the sound by fastening the belt. Contemporary algorithmic applications have more and more such features; they propose, suggest and limit our options.

    But an action is done voluntarily only if it is done intentionally (the agent is "in control") and is free from controlling influences. Is the driver free from controlling influences if the seat-belt system forces him to react to the beeping sound? And are we free from control if algorithms decide whose pictures we'll see on dating sites, or which music we will listen to next? What, exactly, is the difference between algorithmic suggestion, control and manipulation?

    Naturally, persuasive technology should comply with the requirement of voluntariness in order to guarantee autonomy. Algorithms complicate this issue, since voluntariness presupposes a sufficient understanding of the specific technology being used. But what does it mean to "understand", and what degree of understanding is sufficient? What is the correct reading of "understandability": "transparency", "explainability" or "auditability"? How much, and what exactly, should a user understand about the technology? When can one genuinely judge whether or not they want to use that particular technology? We'll look at this topic in more detail in Chapter 4.
