What do the principles of beneficence (do good) and non-maleficence (do no harm) mean for AI, and how do they relate to the concept of the “common good”?

II. The common good – calculating consequences

Illustration. © Keksi Agency via City of Helsinki / Communications

Suppose you are the Chief Digital Officer of the City of Helsinki. You are asked to consider whether the city’s healthcare organisation should move from “reactive” to “preventive” healthcare. You read a report describing novel, sophisticated machine learning systems that would help health authorities forecast citizens’ possible health risks.

These methods produce predictions by combining and analyzing data from various medical and healthcare systems. By analyzing this data against a large number of criteria, high-risk individuals could be identified and prioritized. These high-risk individuals could then be proactively invited to a doctor’s appointment to receive proper treatment.

The benefits

The report mentions many advantages. For example, sickness prevention has a lot of potential to improve the health and quality of life for citizens. Furthermore, it would allow better impact estimation and planning of basic healthcare services. Preventive healthcare also has the potential to significantly reduce social and healthcare costs. These savings, the report emphasizes, could be used for the common good.

The potential problems

The report, however, also includes some concerns. For example, the systems raise a number of legal and ethical issues regarding privacy, security, and the use of data. The report asks, for example, where does the border lie between acceptable prevention and unacceptable intrusion? Does the city have a right to use private, sensitive medical data to identify high-risk patients? How is consent to be given, and what will happen to people who don’t give their consent? What about those people who do not give consent because they are not able to?

The report also raises the fundamental question of the city’s role: if the city has information about a potential health risk and does not act upon the data, is the city guilty of negligence? Are citizens treated equally in the physical and digital worlds? If a person passes out in real life, we call an ambulance without having explicit permission to do so. In the digital world, privacy concerns may prevent us from contacting citizens.

What do you think about the above example? As a Chief Digital Officer, would you promote the use of preventive methods? If your answer is something like “yes, the city should seek an ethically and legally acceptable way to use those methods – there are so many advantages compared to the possible risks”, you were probably using a form of moral reasoning called "utilitarianism".

According to utilitarians, morally right actions are the ones that produce the greatest balance of benefits over harm for everyone affected. Unlike other, more individualistic forms of consequentialism (such as egoism) or unevenly weighted forms (such as prioritarianism), utilitarianism considers the interests of all humans equally. However, utilitarians disagree on many specific questions, such as whether actions should be chosen based on their likely results (act utilitarianism) or whether agents should conform to rules that maximize utility (rule utilitarianism). There is also disagreement as to whether total (total utilitarianism), average (average utilitarianism), or minimum utility should be maximized.

For utilitarians, utility – or benefit – is defined in terms of well-being or happiness. For instance, Jeremy Bentham, the father of utilitarianism, characterized utility as "that property… (that) tends to produce benefit, advantage, pleasure, good, or happiness…(or) to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered."

Utilitarianism offers a relatively simple method for deciding whether an action is morally right or not. To discover what we should do, we follow these steps:

  • Firstly, we identify the various actions that we could perform
  • Secondly, we estimate the benefits and harm that would result from each action
  • Thirdly, we choose the action that provides the greatest benefits after the costs have been taken into account
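The three steps above can be sketched as a small program. The actions and their estimated benefits and harms below are hypothetical illustration values, not figures from the report; in practice, as discussed later in this section, assigning such numbers is the hard part.

```python
# A minimal sketch of the three-step utilitarian procedure:
# (1) identify the possible actions, (2) estimate benefits and harm,
# (3) choose the action with the greatest net benefit.
# All numbers are hypothetical illustration values.

def utilitarian_choice(actions):
    """Return the action with the greatest net benefit (benefit - harm)."""
    return max(actions, key=lambda a: a["benefit"] - a["harm"])

actions = [
    {"name": "do nothing",            "benefit": 0,  "harm": 0},
    {"name": "reactive healthcare",   "benefit": 50, "harm": 20},
    {"name": "preventive healthcare", "benefit": 90, "harm": 35},
]

best = utilitarian_choice(actions)
print(best["name"])  # prints "preventive healthcare" (net benefit 55)
```

Note that the whole procedure reduces to a single comparison once the numbers exist; the ethical difficulty lies in producing those numbers, not in the calculation itself.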
Past, future, present

Utilitarianism provides many interesting ideas and concepts. For example, the principle of “diminishing marginal utility” is useful for many purposes. According to this principle, the utility of an item decreases as the supply of units increases (and vice versa).

For example, when you start to work out, at first you benefit greatly and your results improve dramatically. But the longer you continue, the smaller the impact of each individual training session. If you work out too often, the utility diminishes and you’ll start to suffer from the symptoms of overtraining.

Another example: if you eat one candy, you’ll get a lot of pleasure. But if you eat too much candy, you may gain weight and increase your risk of all kinds of illnesses. This paradox of benefits should always be remembered when we evaluate the consequences of actions. What is the common good now may not be the common good in the future.

Illustration: diminishing marginal utility
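The candy example can be made concrete with a small sketch. A logarithmic utility function is assumed here as a common textbook model of diminishing marginal utility; it is not the only possible choice.

```python
import math

# A minimal sketch of diminishing marginal utility, assuming a
# logarithmic utility function (a common textbook choice):
# total utility grows with each unit consumed, but the extra
# utility of each additional unit shrinks.

def total_utility(units):
    return math.log(1 + units)

def marginal_utility(units):
    """Extra utility gained from consuming one more unit."""
    return total_utility(units + 1) - total_utility(units)

# The first candy adds more utility than the tenth.
print(marginal_utility(0) > marginal_utility(9))  # True
```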

The problems of utilitarianism

Utilitarianism is not a perfect account of moral decision-making. It has been criticized on many grounds. For example, utilitarian calculation requires that we assign values to the benefits and harm resulting from our actions and compare them with the consequences that might result from other actions. But it’s often difficult, if not impossible, to measure and compare the values of all relevant benefits and costs in advance.

"Risk" is commonly used to mean a likelihood of a danger or a hazard that arises unpredictably, or in a more technical sense, the probability of some resulting degree of harm. In AI ethics, harm and risks are taken to arise from design, inappropriate application, or intentional misuse of technology. Typical examples are risks such as discrimination, violation of privacy, security issues, cyberwarfare, or malicious hacking. In practice, it is difficult to compare the risks and benefits for the following reasons:

Firstly, risks and benefits are influenced by value commitments, subjective and diverse preferences, practical circumstances, and personal and cultural factors.

Secondly, harm and benefits are not static. The marginal utility of an item diminishes in a way that can be difficult to foresee. Moreover, a specific harm or benefit may have a different utility value in different circumstances. For example, whether a faster car is more beneficial depends on its intended use – if it is intended to be a school bus, then we should prioritize safety, but if it is used as a racing car, then the answer may be different.

Thirdly, real-world situations are typically so complex that it is difficult to foresee or compare all the risks and benefits in advance. For example, let’s analyze the possible consequences of military robotics. Although contemporary military robots are largely remotely operated or semi-autonomous, over time they are likely to become fully autonomous. According to some estimates, robots reduce civilian and military casualties. But according to other estimates, they do not reduce the risk to civilians. Statistically, in the first decades of war in the 21st century, robotic weaponry has been involved in numerous killings of both soldiers and noncombatants. The possibility of using various techniques – such as adversarial patches (which disrupt a machine’s ability to properly classify images) – to fool and manipulate automated weapons complicates the situation by increasing the specific risk of causing harm to civilians. The overall level of risk also depends on the ease with which wars might be declared if robots are taking most of the physical risk.

Fourthly, utilitarianism fails to take into account other moral aspects. It is easy to imagine situations where a developed technology would produce great benefits for societies, but its use would still raise important ethical questions. For example, let’s think about the case of a preventive healthcare system. The system may indeed be beneficial for many, but it still forces us to ask whether fundamental human rights, such as privacy, matter. Or what happens to the citizen’s right not to know about possible health problems? (Many of us would want to know if we are in a high-risk group, but what if someone does not want to know? Can a city force that knowledge on them?) Or, how can we make sure that everyone has equal access to the possible benefits of a preventive system?
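The technical sense of “risk” introduced earlier – the probability of some resulting degree of harm – can be sketched as an expected-harm calculation. The probabilities and severities below are hypothetical illustration values; the first three difficulties above explain why such inputs are so hard to estimate in real situations.

```python
# A minimal sketch of risk in the technical sense used above:
# expected harm = probability of harm x severity of harm.
# All numbers are hypothetical illustration values.

def expected_harm(probability, severity):
    return probability * severity

# Two hypothetical system designs:
option_a = expected_harm(0.10, 100)  # rare but severe harm
option_b = expected_harm(0.50, 30)   # frequent but milder harm

# Despite its severe worst case, option A has the lower expected harm.
print(option_a < option_b)  # True
```

The calculation itself is trivial; the contested part is where the probability and severity numbers come from, and whether a rare catastrophic harm should really be treated as equivalent to many small ones.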

