What are some of the current challenges for AI ethics, what role do AI guidelines play in shaping the discussion, and how might things develop in the future?
II. Ethics as doing

Ethical considerations relating to technology are not a new subject. Rather, very similar questions have been asked throughout many different iterations of technological development. As an illustration of the historical continuities around these issues, the sociologist Langdon Winner wrote in his 1990 essay “Engineering Ethics and Political Imagination” that a difficulty lies in ethical discussions centring on highly hypothetical and limited “troubling incidents”, without calling into question the broader responsibilities of the whole engineering industry. A very similar issue faces the AI ethics project now, 30 years later.

While the troubles may not be new, the contemporary AI ethics project attracts far more public interest and wider stakeholder participation than engineering ethics ever did. More people are learning about it, many hope to implement AI ethics in their work in some way, and many companies, communities, states and individuals have stakes in the outcomes. With this new wave of interest, publishing ethical guidelines has become the typical way of doing ethics.

Ethics guidelines as texts that do

Conceptualizing AI ethics guidelines as performative texts invites us to look past the ethical content of the guidelines and ask what the text does as a communicative act. To do so, we have to consider who wrote the text and who the intended audience is. Where it is published, and which other authors and actors it references, also shape what ends the text achieves. Sociologists have examined the performativity of ethical guidelines and found that they serve to create and manage expectations among different stakeholders in the AI economy.

Let’s look at three ways that guidelines perform these functions:

1) Guidelines as calls for deregulation

Some authors have argued that the ethical language deployed by companies is a communicative strategy that provides support for self-regulation (Wagner 2018). The implicit narrative in publishing ethics guidelines is that strong moral reflection lies at the core of business practices, which makes industry regulation unnecessary: why should our business be restricted when we are already so ethical? While this communicative strategy is more strongly motivated for private companies, national ethics guidelines also reflect a tension between ethical consideration, regulation, and market-driven innovation. Such guidelines conceptualize ethics as a value to be put into healthy balance with competing values such as economic growth.

One of the core tensions in the AI economy is the balance between the freedom to pursue projects and innovate and the need to prevent and redress harm through regulation. How this tension is resolved will undoubtedly have meaningful consequences for companies working in the AI scene, so it is far from an abstract issue. Thus, the concern with the regulation/deregulation debate is present not only in how AI ethics is communicated, but also in the contents of scientific research on fair AI. Computer scientist Michael Kearns, for example, presents technical advances in algorithmic fairness and privacy as answers to the regulation/deregulation debate.

2) Guidelines as assurances

Others have argued that ethics guidelines work as assurance to investors and the public (Kerr 2020). That is, in the age of social media, news of businesses’ moral missteps spreads fast and can cause quick shifts in a company’s public image. Publishing ethics guidelines assures others that the organization has the competence to produce ethical language and the capacity to take part in public moral discussions, soothing public concern.

Thus AI ethics guidelines work to deflect critique away from companies, in the eyes of both investors and the general public. That is, if a company is seen as able to manage and anticipate the ethical critique produced by journalists, regulators and civil society, it will also be seen as a stable investment, with the competence to navigate public discourses that might otherwise be harmful to its outlook.

3) Guidelines as expertise

With the AI boom well underway, the need for new kinds of expertise arises, and competition over ownership of the AI issue increases. That is, negotiations around AI regulation, AI-driven projects of governmental redesign, the implementation of AI in new fields, and public discourse around AI ethics in the news all demand expertise in AI, and especially in the intersection of AI and society.

Being seen as an expert yields certain forms of power. Experts make judgements, lend legitimacy to viewpoints, define situations, and set priorities. Thus, being recognized as an AI ethics expert gives one some say in what the future of society will look like.

Expertise is only effective if it is publicly recognized, and in the race to become a leading AI organization, publications are a way to secure visibility. According to Greene et al., “ethics and ethical codes designate and defend social status and expertise”. That is, taking part in the AI ethics discussion by publishing a set of ethical guidelines is a way to demonstrate expertise, increasing the organization’s chances of being offered a seat at the table when future AI issues are decided.

Even if ethical guidelines are ineffectual in changing AI practice, they can nonetheless be useful as communicative strategies and perform other functions for the actors involved. They can still help us figure out what kinds of principles and practices should be taken up in the future of ethical AI. And they force us to ask how diverse organizations and individuals can take part in building a more just AI future in their own ways.
