III. Moving forward with ethics
Moving beyond ethical guidelines, how should AI ethics manifest in the future? What kinds of conversations about the ethics of AI should we have, and what kinds of activities and practices of doing ethics should we put into place? This is a difficult question to answer, but some hints can be found by looking at what is left outside the scope of the AI ethics guidelines discussed so far.
Fairness, accountability, and transparency have come to dominate the AI ethics conversation. They also give their name to the largest scientific conference on ethical AI, FAccT. However, as AuroraAI illustrates, questions of “help, welfare, and government-citizen relations” are among the fundamental moral questions facing any government that aims to deploy AI in its public sector services.
Yet these values rarely appear in ethical guidelines. What, then, would it look like to approach AI ethics from a perspective of care? According to the ethics of care, this means taking into account the complex dependencies and interdependencies between individuals, how the consequences of actions propagate and affect the most vulnerable, and how nature and ecology become entwined in these processes.
Moving from new perspectives to new practices, we should examine the role of citizens and civil society, alongside companies, in creating more just AI. Creating ethical AI in practice means moving beyond publishing good intentions to the many ways in which societal actors can participate in shaping different futures.
Carly Kind, Director of the Ada Lovelace Institute, calls this the third wave of AI ethics and suggests that we are moving into a new form of societal engagement:
“Third-wave ethical AI has seen a Dutch Court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically-decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.”
-Carly Kind
Conclusion: now it’s your turn
Ethical questions regarding AI systems pertain to all stages of the AI system lifecycle, understood here to range from research, design, and development to deployment and use – including maintenance, operation, trade, financing, monitoring and evaluation, validation, end-of-use, disassembly, and termination.
In addition, AI actors can be defined as any actors involved in at least one stage of the AI lifecycle, and can refer to both natural and legal persons, such as researchers, programmers, engineers, data scientists, end users, large technology companies, small and medium enterprises, start-ups, universities, and public entities, among others.
AI is developing fast. While nobody can say for certain how it will affect our lives, we can still make a difference. As with most emerging technologies, there are real risks. Still, if artificial intelligence is developed and deployed in ethically sustainable ways, it may bring many positive consequences, not only for individuals and societies but for the planet as a whole. The direction of development, however, depends on us.
“Choice, not chance, determines your destiny.”
-Aristotle
You have reached the end of this section!