II. What is AI ethics?
Before looking at AI ethics, we need to set out what ethics means in the first place.
Ethics seeks to answer questions like “what is good or bad”, “what is right or wrong”, and “what is justice, well-being or equality”. As a discipline, ethics involves systematizing, defending, and recommending concepts of right and wrong conduct through conceptual analysis, thought experiments, and argumentation. (If you want to know more about philosophical reasoning, see this video by Crash Course Philosophy.)
AI ethics is a subfield of applied ethics. Nowadays, it is considered part of the ethics of technology specific to robots and other artificially intelligent entities. It concerns the question of how developers, manufacturers, authorities and operators should behave in order to minimize the ethical risks that AI can pose to society, whether these arise from poor design, inappropriate application, or intentional misuse of the technology.
These concerns can be divided into three time frames as follows:
- immediate, here-and-now questions about, for instance, security, privacy or transparency in AI systems
- medium-term concerns about, for instance, the impact of AI on military use, medical care, or the justice and education systems
- longer-term concerns about the fundamental ethical goals of developing and implementing AI in society
These days, AI ethics is a more general field, closer to engineering ethics: we don’t have to assume the machine is an ethical agent in order to analyze its ethics. Research in the field ranges from reflections on how ethical or moral principles can be implemented in autonomous machines, to empirical analyses of how people solve trolley-problem dilemmas, systematic analyses of ethical principles such as fairness, and critical evaluations of ethical frameworks.