A study identifies gender biases in Internet algorithms and proposes specific training to eliminate them
- Scientific Culture and Innovation Unit
- November 23rd, 2022

A study led by the Open University of Catalonia (UOC), with the participation of researchers from the University of Valencia (UV), the Polytechnic University of Valencia (UPV) and the Polytechnic University of Catalonia (UPC), among other centres, proposes measures to eliminate gender bias in Internet algorithms. Among them: that the people involved in creating an algorithm know the measures that can be taken to minimise possible biases, and apply them. The research has been published in the journal Algorithms.
Rivers of ink have been spilled over whether Internet algorithms have a gender bias. A new study attempts to settle the question, since its authors consider that the debate has so far lacked a scientific analysis.
“This article addresses gender bias from a mathematical perspective and also from a sociological one. My view is that this matter must be approached with interdisciplinarity. Gender is a social concept and, therefore, algorithms must be treated not only from the perspective of mathematics, but also from that of sociology”, highlights Assumpta Jover, co-author of the article, who holds a PhD in Gender Studies from the University of Valencia and a Master's Degree in Gender and Equality Policies, with a specialisation in Research and Analysis of Equality Policies, from the University Institute of Women's Studies.
The multiple sources of gender bias, as well as the particularities of each type of algorithm and dataset, make eliminating this bias a particularly difficult challenge, but not an impossible one. “Designers and everyone else involved in their development need to be informed of the possibility that an algorithm's logic carries biases. They must also know the measures available to minimise possible biases as much as possible, and apply them so that these biases do not occur, because if they are aware of the kinds of discrimination that occur in society, they will be able to identify when their own developments reproduce them”, proposes the principal researcher, Juliana Castañeda Jiménez, an industrial doctoral student at the Open University of Catalonia (UOC), supervised by Ángel A. Juan, of the Polytechnic University of Valencia, and Javier Panadero, of the Polytechnic University of Catalonia.
The novelty of this work lies in the fact that it has been driven by specialists from different fields, including a sociologist, an anthropologist and experts in gender studies and statistics, among others. “The team members provided a perspective that goes beyond the mathematics of the algorithm on its own, allowing us to consider the algorithm as a complex socio-technical system”, describes the study's principal researcher.
Algorithms are increasingly used to decide whether to grant or deny a loan, or to accept or reject applications. As the number of artificial intelligence (AI) applications grows, along with their capabilities and relevance, it becomes all the more important to assess the possible biases attached to these operations. “Although this is not a new concept, there are many cases in which this problem is not studied, and its possible consequences are therefore ignored”, say those responsible for the research, which focuses mainly on gender bias in different fields of AI.
These biases can have a major impact on society: “Biases affect all people who are discriminated against, excluded or associated with a stereotype. For example, they could exclude a gender or a race from a decision-making process or simply assume a particular behaviour on the basis of gender or skin colour”, Juliana Castañeda explains.
According to Castañeda, “it is possible for algorithmic processes to discriminate by gender, even when they are programmed to be ‘blind’ to this variable”. The research team – which also includes Milagros Sáinz and Sergi Yanes, from the Gender and ICT group (GenTIC) of the Internet Interdisciplinary Institute (IN3), Laura Calvet, from the Salesian School of Sarrià, and Ángel A. Juan – illustrates this with several examples: the case of a well-known recruitment tool that preferred male candidates over female ones, or that of credit services that offered women less favourable conditions than men. “If you use historical data that is not balanced, you will probably see negative conditioning related to black, gay and even female demographics, depending on when and where the data comes from”, says Castañeda.
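As a rough illustration of this point (not code from the study), the Python sketch below trains a simple classifier on an invented, imbalanced "historical" dataset from which the gender column has been removed; the predicted approval rate still ends up lower for women because another feature correlates with gender. All variable names, figures and thresholds are hypothetical.

```python
# A minimal, hypothetical sketch (not the authors' code) of "proxy" discrimination:
# a model trained WITHOUT the gender column can still produce gender-skewed
# decisions when another feature correlates with gender in imbalanced historical
# data. Every variable name and number below is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)  # 0 = men, 1 = women (synthetic)

# Proxy feature: uninterrupted years in the workforce, historically lower for
# women in this invented dataset because of unequal care-work burdens.
career_years = rng.normal(10 - 3 * gender, 2, n)
income = rng.normal(30_000 + 800 * career_years, 5_000, n)

# Historical label: past loan approvals that already encode the imbalance.
approved = income + 2_000 * career_years + rng.normal(0, 8_000, n) > 55_000

# Train a "gender-blind" model: gender is deliberately left out of X.
X = np.column_stack([career_years, income])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

pred = model.predict(X)
print("approval rate, men:  ", pred[gender == 0].mean())
print("approval rate, women:", pred[gender == 1].mean())
# The gap persists even though the model never saw the gender variable.
```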
Men and the sciences, women and the arts
To determine the extent to which the algorithms we encounter are affected by these patterns, the researchers analysed previous studies that identified gender biases in data processing in four kinds of AI: those handling natural language processing and generation, decision management, and facial and voice recognition.
Overall, they found that all of the algorithms identify and classify white men better. They also observed that the algorithms reproduce false beliefs about the physical attributes that supposedly define people according to their biological sex, ethnic or cultural origin or sexual orientation, and that they stereotypically associate masculinity with the sciences and femininity with the arts.
Many of the procedures used in image and voice recognition applications are also based on these stereotypes: just as cameras recognise white faces better, audio analysis struggles with higher-pitched voices, which mainly affects women.
The cases most prone to these defects are algorithms built from the analysis of real-world data that carries a social context. “Some of the main causes are the under-representation of women in the design and development of AI products and services, and the use of gender-biased datasets”, notes the researcher, who believes the problem stems from the cultural environments in which these systems are developed.
Article: Castañeda, J.; Jover, A.; Calvet, L.; Yanes, S.; Juan, A. A.; Sáinz, M. “Dealing with Gender Bias Issues in Data-Algorithmic Processes: A Social-Statistical Perspective”. Algorithms (2022). https://doi.org/10.3390/a15090303