What do the principles of beneficence (do good) and non-maleficence (do no harm) mean for AI, and how do they relate to the concept of the “common good”?

III. Common good and well-being

Despite the problems outlined in the previous section, the principles of utilitarianism may help us to consider both the immediate and the longer-term consequences of our actions. One should remember that in real life, defining the “common good” requires a diversity of viewpoints.

The common good approach requires that everyone should have access to the benefits of AI. This highlights the importance of ensuring that the potential benefits of AI do not accumulate unequally, and that they are made accessible to as many people as possible. Moreover, AI should be aligned with human values, goals, and norms, while respecting cultural and individual diversity.

The common good is not singular but plural. Identifying the social and moral norms of the specific community in which an AI will be deployed is thus obligatory. It is the only way to bring AI’s potentially significant and diverse benefits to society and to facilitate, among other things, greater well-being and welfare for all.
