Algorithmic Management in Hiring: How Does the EU Address the Risk of Discrimination?

The expansion of artificial intelligence (AI) is bringing about changes and challenges in many sectors, often faster than society can fully adapt. From a labour law perspective, one of the most significant impacts of AI is the advent of algorithmic management in hiring processes.

AI tools in recruitment can serve a wide variety of functions, ranging from simple keyword-matching systems that screen candidate resumes to algorithms that predict future job performance or assess personality traits through specially designed games. These tools promise benefits for both employers and job seekers: employers can automate long, expensive selection processes, while candidates may benefit from faster recruitment procedures that should, in theory, be more impartial. Algorithms, after all, are expected to be free from the unconscious biases that human recruiters inevitably bring to the process. However, despite these potential advantages, AI systems have already shown serious discrimination issues.
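To make the simplest of these tools concrete, the sketch below shows what a bare-bones keyword-matching screener can look like. It is purely illustrative: the keyword list, the scoring rule, and the sample resume are all invented for the example.

```python
# Hypothetical keyword list the employer considers relevant for the role.
JOB_KEYWORDS = {"python", "sql", "project management", "agile"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of job keywords found in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for keyword in JOB_KEYWORDS if keyword in text)
    return hits / len(JOB_KEYWORDS)

resume = "Data analyst with Python and SQL experience in agile teams."
print(f"Match score: {keyword_score(resume):.0%}")  # Match score: 75%
```

Even this trivial scorer already embodies an employer’s choices about what counts as relevant, which is where bias can first enter the process.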

Most of these tools rely on so-called “machine learning”, a process that requires vast quantities of data (the “training data”) from which the system learns patterns. In recruitment, the training data typically consists of profiles of successful employees, which the AI uses to determine which traits are favourable or unfavourable for specific job roles. For example, if an AI system is tasked with hiring new managers and is trained on the resumes of successful managers, it may wrongly assume that some of their common traits (such as being male or having a certain background) are indicators of future managerial success, with the risk of perpetuating and reinforcing existing social biases. The fundamental issue with AI in recruitment is that the algorithm mistakes correlation for causation. To return to the earlier example, if most top managers happen to be white men, an AI system may infer that being a white man is a necessary condition for managerial success, when in fact it is merely a reflection of existing discriminatory patterns in society. This could result in certain groups being perpetually confined to stereotypical roles and denied opportunities in areas where they are underrepresented.

Furthermore, because AI systems apply their selection criteria uniformly and at scale, they risk causing large-scale indirect discrimination. For instance, if an algorithm is programmed to prioritise cost efficiency in the hiring process, it might disadvantage young women, who are statistically more likely to take maternity leave, regardless of their individual qualifications and abilities.
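Both mechanisms stem from the same root: the system extracts statistical regularities from biased examples. The sketch below illustrates this with entirely invented data. A naive frequency-based learner, given past “successful” managers as training data, ends up rewarding gender simply because the historical examples are imbalanced, not because gender has any bearing on performance.

```python
from collections import Counter

# Hypothetical training data: traits of past managers labelled "successful".
# The gender imbalance mirrors a biased historical workforce, not merit.
training_data = [
    {"gender": "male", "degree": "mba"},
    {"gender": "male", "degree": "mba"},
    {"gender": "male", "degree": "engineering"},
    {"gender": "male", "degree": "mba"},
    {"gender": "female", "degree": "mba"},
]

def trait_frequencies(data):
    """Learn how often each trait value appears among the successful examples."""
    counts = Counter()
    for person in data:
        for trait, value in person.items():
            counts[(trait, value)] += 1
    return {key: n / len(data) for key, n in counts.items()}

model = trait_frequencies(training_data)

def score(candidate):
    """Naive score: sum of the learned frequencies of the candidate's traits."""
    return sum(model.get((trait, value), 0.0) for trait, value in candidate.items())

# Two candidates identical in every job-relevant respect:
print(score({"gender": "male", "degree": "mba"}))    # higher score (1.6)
print(score({"gender": "female", "degree": "mba"}))  # lower score (1.0)
```

Two candidates with identical qualifications receive different scores purely because of the gender distribution in the historical data: exactly the correlation-for-causation error described above.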

AI’s tendency to absorb and perpetuate societal biases has already been highlighted by several cases. In 2018, Amazon abandoned its recruiting algorithm after it was found to penalise women, a consequence of the scarcity of successful female employees in its training data. In 2020, the Tribunal of Bologna (Italy) ruled against Deliveroo for using an algorithm that allocated job opportunities among riders in a way that favoured some to the detriment of others on the basis of discriminatory factors. A similar issue emerged in the United States, where a resume-screening algorithm favoured candidates with irrelevant traits (being named Jared and having played lacrosse, a sport associated with white, upper-class culture), which it treated as predictive of good future job performance.

These cases demonstrate the urgent need for effective regulation to ensure that AI systems are managed in a way that protects workers’ rights. In this context, the European Union (EU) has been at the forefront of developing comprehensive regulatory frameworks for AI, recently adopting Regulation (EU) 2024/1689 (the AI Act), which seeks to address the risks posed by AI while still fostering innovation.

The EU has in fact consistently supported the Member States in the development of AI, funding grants and research programmes on AI applications and encouraging the establishment of collaborative networks. At the same time, it has always been clear that AI innovation must be balanced with the protection of fundamental rights, which an inappropriate use of AI can seriously threaten. To preserve core European social values, the new regulation therefore follows an approach that is both risk-based and human-centred.

From the first perspective, AI systems are classified into different categories according to the level of risk they pose. AI systems used in employment, worker management, and recruitment are explicitly classified as high-risk, as highlighted in Recital 57 and Article 6(2) of the regulation. These systems are considered high-risk because they can “perpetuate historical patterns of discrimination” in employment, a key concern for workers’ rights. As a result, the regulation requires high-risk AI systems to comply with specific rules, including pre-market conformity assessments, ongoing monitoring, and rigorous safeguards on the quality and representativeness of the data they use. For instance, training data must be relevant, sufficiently representative, and as free of errors as possible, to prevent discrimination against legally protected groups. These obligations are distributed between the providers and the deployers of AI systems, with a strong emphasis on the principles of transparency, traceability, accuracy, explainability, and impartiality.
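As an illustration of what ongoing monitoring can involve in practice, the sketch below compares selection rates between two groups of applicants. The 80% (“four-fifths”) threshold is a heuristic borrowed from US employment practice and is used here purely as an example; the AI Act does not prescribe this specific test, and all figures are invented.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most favoured group's rate."""
    return rate_group / rate_reference

# Invented outcome figures for two groups of applicants.
rate_men = selection_rate(selected=60, applicants=100)    # 0.60
rate_women = selection_rate(selected=30, applicants=100)  # 0.30

ratio = impact_ratio(rate_women, rate_men)
print(f"Impact ratio: {ratio:.2f}")  # Impact ratio: 0.50

# Heuristic threshold (the US "four-fifths rule"), used here only as an example.
if ratio < 0.8:
    print("Possible indirect discrimination: review the system and its data.")
```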

From the second, human-centred perspective, the EU seeks to maintain the primacy of human decision-making, with AI serving only a supportive role. The European Parliament emphasised this point in its position adopted on 13 March 2024, stating that AI systems should have “the ultimate aim of increasing human well-being”. This reflects a broader vision in which AI is viewed as a powerful tool to enhance human capabilities rather than replace human judgment, at least for now.

The overarching challenge is to balance the opportunities offered by AI with the need to protect workers’ rights, human dignity, and the principle of non-discrimination. The new EU regulation seeks to ensure that AI systems in recruitment are subject to human oversight, rather than allowing AI to dictate hiring decisions autonomously, thereby preventing the erosion of fairness, transparency, and justice in the labour market. By implementing this regulatory framework, the EU aims to create an environment where AI innovation can thrive without compromising essential human rights, and where human judgment remains central to safeguarding the integrity and equity of the hiring process.
