

  • October 3rd, 2024
 

Conference: “Vulnerability, discrimination and algorithmic biases: A new roadmap in Europe to safeguard fundamental rights against ‘unacceptable risks’ related to AI”

Mª Teresa García-Berrio Hernández
Full Professor of Philosophy of Law at the Complutense University of Madrid

 

The proposal for a common European legal framework on Artificial Intelligence (AI), as set out in Regulation (EU) 2024/1689 laying down harmonised rules on Artificial Intelligence, incorporates a system based on managing the potential risk of discrimination against users of AI systems. In this sense, one of the most controversial issues during the legislative process leading to the European regulation on AI was precisely the proposal to establish an adequate degree of prohibition for those AI systems that could pose “unacceptable risks”, in order to safeguard fundamental rights and to ensure equality and non-discrimination for especially vulnerable users.

Human beings can be especially vulnerable to the harmful influence of AI because of the psychological pressure it can exert on our subconscious. Faced with the potential risk of exploitation of human vulnerability, the European legislator has decided to grant reinforced protection to those who may become vulnerable users of AI systems because of their age, a possible disability, their socioeconomic status or their own personal characteristics.

In particular, the European regulation on AI classifies as “unacceptable risk” systems those AI systems that represent a direct threat to people and to the safeguarding of their fundamental rights, and prohibits them. This prohibition extends to three essential modalities of AI systems.

(i) Firstly, the prohibition applies to AI systems that use cognitive manipulation of the behaviour of individuals or vulnerable groups, such as children and teenagers. This prohibition aims to address the potential of AI devices to promote dangerous behaviours in children or to encourage suicidal behaviour in teenagers. (ii) Secondly, the prohibition extends to AI systems that use algorithms to create identity biases in order to classify people according to their socioeconomic status or personal characteristics (such as race, sex, nationality, sexual orientation or religion). (iii) Lastly, the European regulation specifically prohibits AI systems that use biometric identification, whether in real time or remotely, or that use facial recognition devices for surveillance.

 

Facial recognition

 

The ethical implications of article 5.1 (a) of the European regulation on AI are significant, because this provision simultaneously identifies two different ways of exerting control, influence or manipulation over the subconscious of users of AI systems: (i) manipulation of decision-making processes, and (ii) misinformation intended to alter the ethical, moral, political or ideological convictions of users. This second way of influencing or altering the cognitive freedom of AI users through misinformation, or even through misleading information such as fake news, requires thorough examination.

Under a broad interpretation of article 5.1 (a) of the European regulation on AI, we can conclude that if we accept the proposal that the human subconscious should be protected by law, then not only should the deliberately misleading or manipulative subliminal techniques used by such AI systems be banned, but we should also consider an analogous legal treatment for AI systems liable to take advantage of human vulnerability. In particular, article 5.1 (b) of the regulation prohibits AI systems that exploit people’s vulnerability “(...) with the aim of materially distorting the behaviour of an individual in a way that may cause them significant harm”. This second provision has a very wide scope of application, because it covers AI systems that interact directly with users (as in the case, for example, of chatbots), as well as AI systems based on recommendations, the so-called “recommender systems”.

However, identifying the areas of vulnerability that can lead an AI system to be banned under article 5.1 (b) of the European regulation on AI is quite difficult, not only because of the ambiguity of the concept of vulnerability used in the articles of the regulation, but especially because the regulation places the burden of proof on the “vulnerable user”. This last point is without doubt a notable deficiency of the European regulation because, in a real case, a person who intends to prove that an AI system has exploited a vulnerability linked to their personality, a possible disability or their socioeconomic circumstances will face the challenge of gathering convincing evidence to prove the malicious intent behind the AI system.

Likewise, another means by which AI systems perpetuate discriminatory structures is the selection and weighting of the variables that their algorithms use for measurement and prediction. For example, if a bank’s credit-scoring system prioritises income level over saving capacity as an indicator of its clients’ solvency, this decision will undoubtedly place certain groups that statistically have lower income levels, such as women, young workers, immigrants or pensioners, at a major disadvantage. Indeed, any inaccurate or mistaken selection of personal data during the training of an AI system can lead the algorithm to make unfair decisions based on preconceived beliefs, predilections or even social stereotypes, which inevitably lead to the stigmatisation of groups, minorities or vulnerable people.
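The toy Python sketch below illustrates this weighting effect; the scoring function, its weights and the sample applicants are invented for the example and are not drawn from the regulation or from any real credit model.

```python
# Hypothetical illustration of how variable weighting can build bias into a score.
# The weights and applicant data are invented; no real scoring model is implied.

def solvency_score(income, savings_rate, income_weight=0.9, savings_weight=0.1):
    """Toy score that prioritises income level over saving capacity."""
    return income_weight * (income / 50_000) + savings_weight * savings_rate

applicants = {
    "applicant_a": {"income": 60_000, "savings_rate": 0.05},  # high income, saves little
    "applicant_b": {"income": 22_000, "savings_rate": 0.35},  # lower income, strong saver
}

for name, data in applicants.items():
    print(name, round(solvency_score(data["income"], data["savings_rate"]), 3))

# Because income dominates the weighting, applicant_b scores far lower despite
# saving a much larger share of their income -- the structural disadvantage for
# lower-income groups described above.
```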

 


 

Therefore, algorithmic biases are errors generated by AI systems that inevitably reinforce (i) discrimination by differentiation, that is, when people are treated differently even though their circumstances are comparable, and (ii) discrimination by indifference, that is, when people are treated in the same way under the law even though their circumstances differ.

If we look at the doctrine of the European Court of Human Rights (ECtHR), it might not seem likely that this second modality, discrimination by indifference, would have a significant impact on the safeguarding of fundamental rights. However, with the progressive implementation of predictive AI systems the picture has been reversed, and every day we find more examples of this second modality of discrimination in the everyday use of ante facto algorithms, as in the proliferation in public spaces of AI-based facial recognition devices for surveillance.

Additionally, we must remember that, in the traditional European framework of fundamental rights, safeguards against the risk of discrimination have historically been organised into two different categories of legal instruments for protecting people against discrimination: (i) the first category comprises the so-called preventive or ante facto instruments against discrimination, and (ii) the second comprises the reactive or post facto instruments against discrimination.

(i) The first modality of anti-discrimination instruments works as a preventive mechanism, because it articulates commands intended to prevent discriminatory decision-making. An example of this kind of ante facto anti-discrimination instrument can be found in article 9 of Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, which prohibits the processing of special categories of personal data; in particular, “(...) data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation”.

(ii) The second type of anti-discrimination instruments, known as reactive or post facto instruments, seeks instead to reverse the structures that place people belonging to certain groups or minorities in situations of disadvantage or discrimination.

It goes without saying that most anti-discrimination mechanisms intervene ex post, that is, after the discriminatory situation has occurred. In the case of algorithmic biases, however, the discrimination has an ex ante, preventive nature, embodied in errors embedded in the databases that will later be used to develop automated decision-making systems. Likewise, algorithms can be used to perpetuate the stereotypes that sustain these social structures of discrimination. Indeed, when an AI system uses predictive algorithms, the validation and test data used to train the system will probably reproduce some of the discriminatory structures applied to certain racial and ethnic groups and religious minorities in historical positions of disadvantage, with the result that the AI system assumes the biases it incorporates to be accurate or valid. In fact, it has been shown that when AI systems are used as predictive tools to generate criminal profiles for serious crimes (such as homicide, murder or kidnapping), the training and validation data used by the developers of those systems draw on court records and resolved cases. The system will therefore tend to rely on a higher number of male profiles belonging to certain ethnic groups or minorities to generate “suspect categories” or “ante facto prevention categories” with a view to predicting future unlawful acts. In this regard, the European legislator takes a stand when section 4 of article 10 of the European regulation states that “(...) data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used”. The aim of this provision is none other than to ensure that “high-risk” AI systems work as expected and in a safe way, and that they do not end up becoming a source of discrimination.
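As a purely illustrative sketch of this feedback effect, the short Python example below trains a naive per-group “risk” rule on an invented archive of resolved cases; the records, group labels and threshold are hypothetical and are not taken from the regulation or from any real predictive-policing system.

```python
# Hypothetical illustration: a naive "risk" rule learned from historical case
# records simply reproduces the over-representation already present in them.
# All records, group labels and thresholds are invented for the example.
from collections import Counter

# Synthetic archive of resolved cases: each record is (group, convicted).
# Group B is over-represented simply because it was policed more heavily.
historical_cases = ([("A", True)] * 20 + [("A", False)] * 80 +
                    [("B", True)] * 60 + [("B", False)] * 40)

# "Training": estimate a per-group base rate from the historical archive.
totals, convictions = Counter(), Counter()
for group, convicted in historical_cases:
    totals[group] += 1
    convictions[group] += convicted

base_rate = {g: convictions[g] / totals[g] for g in totals}

# "Prediction": flag everyone whose group base rate exceeds a fixed threshold.
def flag_as_suspect(group, threshold=0.5):
    return base_rate[group] >= threshold

for group in sorted(base_rate):
    print(group, round(base_rate[group], 2), flag_as_suspect(group))
# Every member of group B is flagged ex ante, regardless of individual
# behaviour: the historical bias itself becomes the "suspect category".
```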

 


 

Despite the many challenges posed by the risk of discrimination through algorithmic biases, it is encouraging to see the important advances of recent years, such as the introduction of Recital 44 into the final text of the European regulation on AI. In this Recital, the European legislator addresses the mandatory requirements applicable to the training, validation and testing data of high-risk AI systems that identify or infer the emotions of natural persons on the basis of their biometric data. Specifically, the legislator expresses doubts about the scientific basis of AI systems that aim to identify or infer emotions, and justifies this view by pointing to the intrusive nature of such systems, which can lead to harmful or unfavourable treatment of certain natural persons or vulnerable groups. The European legislator therefore concludes that “the placing on the market, the putting into service or the use of AI systems intended to detect the emotions of people in situations related to work and education must be prohibited”.

In any case, there are more than enough reasons to be optimistic about the current evolution of the European regulation on AI as regards the protection of vulnerable users against discrimination and algorithmic biases. This is so thanks to the framework of guarantees for “human oversight” of AI systems provided by article 14 of the European regulation on AI, which demands that high-risk AI systems be designed and developed in such a way that they can be effectively overseen by natural persons “during the period in which they are in use”, with the aim of preventing or minimising the risks to health, safety or fundamental rights that those systems may pose for their potential users.

Despite the reliability and predictability that can be attributed to AI systems, we cannot set aside their recurring use in the exploitation of human vulnerabilities for malicious ends. In fact, the construction of algorithmic patterns that allow machines to anticipate human behaviour with ex ante predictability undermines the very concept of human autonomy. That is why, when the European legislator positions the human being, the natural person, as the only entity with the capacity to act autonomously in the effective oversight of high-risk AI systems, it undertakes a sound ontological labour of recognising the inviolability of human autonomy and self-determination as the legal good deserving special protection against the discriminatory challenges that AI brings. We are facing the advent of a new ethical obligation that advocates the recognition of the moral intersubjectivity characteristic of the human condition in the control of AI systems against any risk of algorithmic discrimination.

 


Notes

1. The ECtHR defined the term “discrimination” in the Willis v. United Kingdom case, 11 September 2002, as “(...) treating people in substantially similar situations in a different way, with no objective and reasonable justification”. However, in the Thlimmenos v. Greece case, 6 April 2000, the Court widened the scope of the clause to address so-called “discrimination because of indifference”.

 

Author’s note

This entry is the author’s contribution to the “Observatory of the digital transformation of the public sector” of the PAGODA Chair of the Universitat de València, for dissemination through its website. I would like to thank Lorenzo Cotino and Jorge Castellanos, coordinators of the aforementioned Observatory, for their kindness in inviting me to participate.

 


 

 

Mª Teresa García-Berrio Hernández

 

Tenured University Professor (Profesora Titular de Universidad) of Philosophy of Law at the Complutense University of Madrid.