What are human rights, and how do they tie into the current ethical guidelines and principles of AI? We'll also look more closely at three rights of particular importance to AI: the rights to privacy, security, and inclusion.

III. Examples of human rights: privacy, security, and inclusion

Privacy

Privacy concerns are raised, for example, by digital records which contain information that can be used to infer sensitive attributes (age, gender or sexual orientation), preferences, or religious and political views. Biometric data also raises privacy concerns, as it can reveal details of physical and mental health. Often the real worry is not the data itself, but the way the data can be used to manipulate, affect, or harm a person.

Ethically, privacy is related to personal autonomy and integrity. Following the principles set out by John Locke, a right to control our own personal lives has been seen as central to our autonomy. If that right is taken away, it violates something fundamental about our psychological and moral integrity.


GDPR

The General Data Protection Regulation (GDPR) is a legal framework that sets guidelines for the collection and processing of personal data from individuals who live in the European Union.

The GDPR's aim is to give individuals control over their personal data. Any information that relates to an individual who can be directly or indirectly identified counts as “personal data”. This includes names, social security numbers and email addresses. Location information, biometric data, ethnicity, gender, web cookies, and political or religious beliefs can also be personal data. Pseudonymous data (data that does not directly identify an individual but can be connected to them) can also fall under the definition if it is relatively easy to identify someone from it.

The data subject must consent to the processing of their data, and the consent must be “freely given, specific, informed and unambiguous.” Data subjects can withdraw previously given consent whenever they want. Children under 16 can only give consent with permission from a parent (member states may lower this age limit, but to no less than 13).

The GDPR recognizes several privacy rights for data subjects. Their aim is to give individuals more control over their data. Some of these rights are:

  • The right to be informed (a person must be told how their personal data is collected and used)
  • The right of access (a person can request a copy of their personal data and an explanation of how it is used)
  • The right to rectification (a person can have inaccurate or incomplete personal data corrected)
  • The right to erasure, or “the right to be forgotten” (a person can have their personal data deleted)
  • The right to restrict processing (a person can limit the ways in which their personal data is used)

If you process data, the GDPR requires you to do so in line with its data protection and accountability principles. You must consider these principles in the design of any new product or activity. The data protection principles are:

  • Lawfulness, fairness and transparency: Processing must be lawful, fair, and transparent to the data subject
  • Purpose limitation: You must process data for the legitimate purposes specified explicitly to the data subject when you collected it
  • Data minimization: You should collect and process only as much data as absolutely necessary for the purposes specified
  • Accuracy: You must keep personal data accurate and up to date
  • Storage limitation: You may only store personally identifying data for as long as necessary for the specified purpose
  • Integrity and confidentiality: Processing must be done in such a way as to ensure appropriate security, integrity, and confidentiality (for example by using encryption)
  • Accountability: The data controller is responsible for being able to demonstrate GDPR compliance with all of these principles

According to GDPR, if you process data you’re also required to handle data securely by implementing “appropriate technical and organizational measures.”
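
To make the “integrity and confidentiality” principle above concrete, here is a minimal sketch of encrypting a personal-data record at rest with symmetric encryption. It assumes the third-party Python package cryptography is installed; a real deployment would also need key management, access control, and audit logging, all of which are omitted here.

```python
# A minimal sketch of encrypting personal data at rest, assuming the
# third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"name=Santeri;email=santeri@example.com"  # hypothetical record

token = cipher.encrypt(record)    # ciphertext, safe to store in a database
original = cipher.decrypt(token)  # recoverable only by holders of the key

assert original == record
```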

How to protect privacy – data anonymization methods

The GDPR permits organisations to collect anonymized data without consent, use it for any purpose, and store it for an indefinite time – as long as all identifiers are removed from the data. There are several techniques for data anonymization, including the following (a short code sketch follows the list):

  • Generalization deliberately removes some of the data to make it less identifiable. Data can be modified into a set of ranges or a broad area with appropriate boundaries. You can, for example, remove the street address while retaining the town name. In this way, you can eliminate some of the identifiers while retaining a degree of data accuracy.
  • Pseudonymization is a data management and de-identification method that replaces private identifiers – names, ID codes – with fake identifiers or pseudonyms, for example replacing the identifier “Santeri” with “Saara”. Pseudonymization preserves statistical accuracy and data integrity, so the modified data can be used while still protecting data privacy.
  • Synthetic data replaces the original dataset with artificially created records instead of altering it. The process involves creating statistical models based on patterns found in the original dataset. You can use standard deviations, medians, linear regression or other statistical techniques to generate the synthetic data.
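
The sketch below illustrates all three techniques on a toy dataset using only the Python standard library. The records, field names, and the trivial statistical model (mean and standard deviation of age) are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of generalization, pseudonymization, and synthetic data.
import random
import statistics

records = [
    {"name": "Santeri", "age": 34, "street": "Mannerheimintie 5", "town": "Helsinki"},
    {"name": "Saara",   "age": 29, "street": "Aleksanterinkatu 1", "town": "Espoo"},
]

# Generalization: drop the street address, coarsen the exact age into a range.
def generalize(r):
    low = (r["age"] // 10) * 10
    return {"age_range": f"{low}-{low + 9}", "town": r["town"]}

# Pseudonymization: replace the direct identifier with a pseudonym. The
# mapping must be stored separately and protected, or the data remains
# re-identifiable.
pseudonyms = {}
def pseudonymize(r):
    pid = pseudonyms.setdefault(r["name"], f"person-{len(pseudonyms) + 1}")
    return {"id": pid, "age": r["age"], "town": r["town"]}

# Synthetic data: fit a simple statistical model to the original data and
# sample artificial records from it instead of releasing the originals.
ages = [r["age"] for r in records]
mu, sigma = statistics.mean(ages), statistics.stdev(ages)
synthetic = [{"age": round(random.gauss(mu, sigma))} for _ in range(5)]

print([generalize(r) for r in records])
print([pseudonymize(r) for r in records])
print(synthetic)
```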

Data anonymization can be challenging, however, because there are also methods of “de-anonymization”. De-anonymization, also referred to as data re-identification, attempts to re-identify encrypted or obscured information, for example by cross-referencing anonymized information with other available data in order to identify a person, group, or transaction.
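
To make the risk concrete, here is a minimal sketch of a linkage attack on the hypothetical generalized data from the previous example: quasi-identifiers that survive anonymization (age range and town) are joined against an auxiliary public dataset. The datasets are invented for illustration.

```python
# A minimal sketch of a linkage ("re-identification") attack: anonymized
# records are cross-referenced with publicly available auxiliary data.

anonymized = [
    {"age_range": "30-39", "town": "Helsinki", "diagnosis": "asthma"},
]

# Auxiliary public data (e.g. a voter roll or a social-media profile).
public_directory = [
    {"name": "Santeri", "age": 34, "town": "Helsinki"},
    {"name": "Saara",   "age": 29, "town": "Espoo"},
]

def matches(anon, person):
    low, high = map(int, anon["age_range"].split("-"))
    return anon["town"] == person["town"] and low <= person["age"] <= high

for anon in anonymized:
    candidates = [p["name"] for p in public_directory if matches(anon, p)]
    if len(candidates) == 1:
        # A unique match re-identifies the "anonymous" record.
        print(f"Re-identified: {candidates[0]} -> {anon['diagnosis']}")
```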

Safety and security

The right to safety is a norm protecting individuals from physical, social and emotional harms, including accidents and malfunctions. Security means safety from malicious and intentional threats.

As a right, safety creates a moral obligation to design our products, laws and environments in such a way that safety is protected even in unconventional circumstances or in cases of impairment. In terms of AI, safety has come to encompass several different conversations, including the following:

1) AI as an existential threat

The conversation around AI as an existential threat takes a highly speculative and future-oriented stance towards artificial intelligence. It focuses on asking what kind of threats to humanity are posed by AI systems if they become too complex to control (this kind of “superintelligence” scenario is painted by thinkers such as Nick Bostrom and Ray Kurzweil).

However, the plausibility of a future of superintelligent AI has been called into question, both by philosophers and technologists. As things stand, there is no reason to assume that superintelligence will emerge from developing contemporary algorithmic methods.

2) Safety in AI

The second interpretation of safety in AI is the practical question of designing systems that behave in a safe and predictable manner. As AI systems are integrated into ever-widening areas of life, it becomes more important that the systems are well designed to account for the complexity of the world. A very practical and already existing example is lane-keeping technology, which uses machine learning to prevent cars from veering out of their lanes. Machine learning researchers have found that some lane detection algorithms can be fooled fairly easily by rogue road markings, causing the car to follow the fake lane markings and veer off the road.

The ethically – and legally – significant question is “what are the acceptable limits to robustness?” It is certainly conceivable that there is a set of circumstances so improbable that, even if the system’s safety cannot be assured under them, we can concede that “nobody could have realistically seen that coming”. Where this limit lies, though, is a difficult problem, and definitely not one that is exclusive to AI or even technology.

Nonetheless, the progressive zeal attached to visions of an AI future has raised wholly new questions about the limits of safety and the taming of environmental uncertainty. An example of this can be found in the discussion around autonomous vehicles.

3) Producing safety with AI

The third concept of safety and AI we will look at in this section is the production of safety through the use of AI. Can AI make the world safer? Can AI make the world feel safer? And safer for whom?

Robotization can provide an example of this concept in practice. The work of handling hazardous materials or working in hazardous environments can be delegated to robots, protecting the health of human (or animal) workers.

Another way certain forms of safety are produced through AI is through automated surveillance. AI-powered surveillance has manifested in many domains: in public spaces, in law-enforcement work through predictive policing, and in domestic life through products like Amazon’s Ring. Although surveillance cameras (CCTV) have existed and dominated public and semi-public spaces for decades, the ACLU argues that automation brings about a big qualitative shift in how surveillance functions. But what is so different?

“Imagine a surveillance camera in a typical convenience store in the 1980s. That camera was big and expensive, and connected by a wire running through the wall to a VCR sitting in a back room. There have been significant advances in camera technology in the ensuing decades — in resolution, digitization, storage, and wireless transmission — and cameras have become cheaper and far more prevalent.

“Still, for all those advances, the social implications of being recorded have not changed: when we walk into a store, we generally expect that the presence of cameras won’t affect us. We expect that our movements will be recorded, and we might feel self-conscious if we notice a camera, especially if we were doing anything that we feel might attract attention. But unless something dramatic occurs, we generally understand that the videos in which we appear are unlikely to be scrutinized or monitored.”

-The Dawn of Robot Surveillance, ACLU

Moreover, it is an open empirical question to what extent AI surveillance really produces safety. As the example of chilling effects illustrates, the existence of AI surveillance may itself contribute to a feeling of unsafety. Furthermore, it may directly contribute to actual unsafety and produce harm. AI-powered policing, for example, can lead to direct physical harm because of its predictive nature and the methods of enforcement. And when ubiquitous and automatic surveillance allows even the most petty transgressions to be logged, it risks making the consequences of policing more damaging than the original crime.

With the disparate levels of policing, disparate methods of enforcement, and disparate levels of surveillance across communities, most clearly along racial dimensions, it becomes clear that AI surveillance creates a different kind of safety (and unsafety) for different people. Again, like before, the value of safety becomes entwined with other ethical values such as justice and non-discrimination.

4) A safe and healthy environment: AI and climate change

Safety also means the right to a safe and healthy environment. Today, this right is threatened by climate change. The effects of climate change are already visible – storms, droughts, fires, and flooding have become more common, more frequent and more devastating, and global ecosystems are changing. All of this affects the environment on which our existence depends. The IPCC special report on climate change (2018) estimated that the world will face catastrophic consequences unless global greenhouse gas (GHG) emissions are eliminated within thirty years.

To summarize, safety plays into AI technologies in multiple different ways. These all raise questions about the balancing of normative values: while calls to make “AI for good” sound promising, in practice the enactment of rights and normative values in technological systems often collides with the plethora of conflicting interests and deep injustices existing in the world. When evaluating safety, it is then important to evaluate what other rights intersect in practice and ask, “safety for whom?”



Inclusion

Inclusion means that all people, regardless of the characteristics that make them different — be it race, gender, sexuality, ability, or some other feature — have an equal right to fully participate in society.

Inclusion of disabled people

According to the World Health Organization (2018), more than a billion people live with a disability. On the one hand, AI can further marginalize and exclude disabled individuals. On the other hand, AI technologies have enormous potential to promote their well-being, as AI could “augment” and support humans with disabilities.

For example, there are several tools for developing communication and literacy skills that can support communication with those who have cognitive disabilities and/or complex speech and language difficulties (such as dementia, cerebral palsy, and autism). Moreover, AI-based assistive technologies provide further examples: image description for blind people, speech recognition and captioning for hearing-impaired people, sign language recognition and generation for deaf people, multilingual text-to-speech for people with cognitive disabilities such as dyslexia, care robots for elderly people, and mobility guides for visually impaired people.


Inclusion and the gender divide

Many technology researchers have paid attention to the “gender gap”, or “the gender divide”. This gap has many faces. First, according to UNESCO, women’s digital and algorithmic literacy skills are not at the same level as men’s. Women are less likely to know how to operate computers, navigate the internet, use social media and safeguard information in digital media – abilities that underlie innumerable life and work tasks and are relevant to people of all ages. UNESCO estimates that men are around four times more likely than women to have advanced information and communication technology (ICT) skills, such as the ability to program.

Women are also less likely to create cutting-edge technology. According to studies across G20 countries, just 7 percent of ICT patents are generated by women. A recent survey of undergraduate students in 29 countries found that early adopters of new technologies are overwhelmingly male. Calculations based on the attendees of the world’s top machine learning conferences in 2017 indicate that only 12 percent of the leading machine learning researchers are female.

Technology studies indicate that technology often reflects its developers. For example, even though most low-income women in developing countries are primarily employed in agriculture, men have been the primary adopters and developers of agricultural technologies, and agricultural innovations and tools have been designed specifically for men’s use. As a result, many tools were built for men’s work on farmland and are known to be too heavy for women to push or to have handles that women can’t reach. Much the same often applies to contemporary smart technology – voice command systems often fail to recognize women’s speech, and the sizes of smartphones and computer keyboards are not suited to women users.

Inclusion is, however, about more than ensuring that women are able to participate. It is an obligation to ensure cultural, age-based and geographic diversity, and their intersections. When thinking about the impact of AI on society, there is, simply, a need to intentionally develop systems in different settings and with diverse users.
