Published on April 21, 2021, the European Commission’s draft Artificial Intelligence Act aims to introduce mandatory rules for artificial intelligence (AI) systems for the first time.
Digital tools, whose use has grown significantly with the health crisis, often rely on algorithms without the general public always being informed. Now used in a variety of areas, such as social benefits, policing, justice, and human resources, these increasingly sophisticated algorithms are a source of progress, but they also pose risks to human rights.
Behind their apparent neutrality, research has revealed the extent of the biases that can arise during their design and deployment. Discrimination resulting from algorithmic processing has already been observed in Europe.
In recruitment, for example, gender biases have been identified in a number of algorithms used to sort CVs: they tend to systematically exclude applications from women. Similar effects have been observed in other sectors. In the fight against fraud in social benefits, for instance, data-mining algorithms have been found to target specific people because of their place of residence or their family situation.
What is at stake, clearly, is the risk of automated discrimination. In two earlier reports, “Algorithms: preventing the automation of discrimination” (2020) and “Biometric technologies: the imperative of respecting fundamental rights” (2021), the Defender of Rights warned that the advances made possible by these technologies must not come at the expense of part of the population or at the cost of generalized surveillance.
At a time when the new regulation is being debated in the European Parliament, rights defenders want to recall a fundamental requirement: the right to non-discrimination must be respected in all circumstances, and access to rights must be ensured for everyone.
To this end, the Defender of Rights is today publishing an opinion entitled “A European AI that protects and guarantees the principle of non-discrimination”, co-produced with Equinet, the European network of organizations promoting equality, of which it is a member.
The recommendations issued in this opinion are consistent with the institution’s previous work, emphasizing that the fight against algorithmic discrimination must be a priority and highlighting the role European equality bodies can play in this regard.
Among the safeguards required of the regulation, the opinion recommends:
- Make the principle of non-discrimination a central concern in any European regulation dedicated to AI.
- Establish accessible and effective complaint mechanisms in all European countries, and provide redress for victims where the use of AI systems violates the principles of equality and non-discrimination or other fundamental rights.
- Apply a fundamental-rights-based approach to defining the concepts of “harm” and “risk”, rather than an approach borrowed from product safety regimes.
- Require ex ante and ex post equality impact assessments at regular intervals throughout the life cycle of an AI system.
- Assign mandatory and enforceable “equality duties” to all AI designers and users.
- Allow risk differentiation only after a mandatory assessment of the impact on the principle of non-discrimination and other fundamental rights.
- Support the implementation of the future AI regulation by requiring the new national supervisory authorities to consult equality bodies and other bodies competent in fundamental rights.
- Establish a cooperation process that allows the various bodies involved in implementing the AI Regulation to coordinate at the European and national levels, and make adequate funding for them mandatory.