What do the principles of beneficence (do good) and non-maleficence (do no harm) mean for AI, and how do they relate to the concept of the “common good”?

I. What should we do?

The principle of beneficence

“AI inevitably becomes entangled in the ethical and political dimensions of vocations and practices in which it is embedded. AI Ethics is effectively a microcosm of the political and ethical challenges faced in society.”

– Brent Mittelstadt

The principle of beneficence says “do good”, while the principle of non-maleficence states “do no harm”. Although the two may look similar, they are distinct principles. Beneficence encourages the creation of beneficial AI (“AI should be developed for the common good and the benefit of humanity”), while non-maleficence concerns the negative consequences and risks of AI.

Generally, AI ethics has been primarily concerned with the principle of non-maleficence. Discussion has focused mostly on how developers, manufacturers, authorities, and other stakeholders should minimize the ethical risks that can arise from AI applications – discrimination, violations of privacy, physical and social harms. Often these discussions are framed in terms of intentional misuse, malicious hacking, technical measures, or risk-management strategies.

As a result, deep and difficult ethical problems tend to be oversimplified or left unanswered. One such problem is that of the “common good”. What, exactly, does it mean? In this chapter, we’ll take a look at one classical philosophical answer.
