What does accountability actually mean, and how does it apply to AI ethics? We’ll also discuss what moral agency and responsibility mean and the difficulty of assigning blame.

III. Who should be blamed – and for what?

Agent action

In ethics, accountability is closely related to the concept of "moral agency". A moral agent is "an agent who is capable of acting with reference to right and wrong." Importantly, only moral agents are morally responsible for their actions.

Actions and omissions

Philosophically, a moral agent is primarily responsible for their own actions ("acts"). Sometimes agents are also responsible for things they fail to do ("omissions"). So, if I kill someone, I am responsible for that act. If I merely let someone die, I am responsible for not helping (an omission), even though I did not actively kill anyone.

Omissions and actions are not morally equal. Omitting to do something is generally considered less bad than performing a harmful act: it is worse to actively kill someone than to let them die. But this does not make omissions morally right. Still, we cannot be held responsible for everything we fail to do. Rather, we are responsible only for those things we have deliberately and knowingly chosen to do or to omit.


Philosophically, moral responsibility requires 1) moral autonomy and 2) the ability to evaluate the consequences of one's actions. "Moral autonomy" means the agent's capacity to impose a moral code on themselves in a self-governed way. Further, autonomy requires:

  • The capacity to rule oneself without manipulation by others, and the ability to act without external or internal constraints
  • The authenticity of the desires (values, emotions, etc.) that move someone to act
  • Sufficient cognitive skills – meaning the agent must be able to evaluate, predict, and compare the consequences of their actions, and also to assess the motives driving an action by using ethically meaningful criteria
