In April 2021, the European Commission unveiled an ambitious project to regulate Artificial Intelligence (AI). Together with Parliament, the Commission is seeking a legal approach that would support innovation while respecting "European values": privacy and human rights.
Problem: the definition of artificial intelligence varies from one conversation to the next … it is not just a matter of software or statistical methods. Some AI is "weak" (hyper-specialized); other AI would be "strong" or general (capable of transferring capabilities acquired in one domain to another, quite different one). Agreeing on a law means agreeing on a precise way of qualifying these systems. Then comes the question of liability, and a string of follow-up questions: if something goes wrong, whom should one turn to? The manufacturer? The final software provider? Someone else entirely?
What risks for which systems?
In its draft regulation, the European Commission introduced a classification of algorithmic tools according to four levels of risk: unacceptable risk, which would lead to a ban; high risk, which would require compliance with various obligations before deployment; limited risk, which would require transparency; and minimal risk, which would be left largely unconstrained. Among the banned practices, the Commission lists subliminal manipulation, social scoring, and the kind of predictive policing used in China. But that has not stopped heated debate: security is an area where member states do not like to see their policies dictated.
Other categories also lump together very diverse use cases that will need adjusting. For example, systems used in exam scoring, to facilitate recruitment, or to assist in judicial decisions are all classified as "high risk." The same category covers many tools related to surveillance of public spaces, a particularly burning issue.
Will European regulations prevent surveillance in public spaces?
What should be done with biometric recognition algorithms? Should their use be permitted in certain cases, such as terrorist attacks or locating victims of abduction? Should they be banned entirely, as part of the European Parliament demands? In October, members of the European Parliament called for a complete ban on facial recognition in public spaces and on predictive policing technologies such as those tested by Palantir. The resolution also targets private facial databases such as Clearview AI's. Germany is among the countries pushing for a complete ban on these technologies, in both public and private spaces, on the grounds that they would enable generalized surveillance.
Reactions were not long in coming, pointing to the risk of the Union becoming dependent on other countries if its laws hindered the innovation of its own companies. In France, where the legal framework is under scrutiny and experiments with facial recognition are being debated, three senators recently authored a study on the use of facial recognition in Europe and a report on biometric technologies. Their argument: if Europe does not take up the issue itself, others will, as the London police already do.
What do the risks of discrimination cover?
Another major issue that European regulation needs to address is discrimination encoded into algorithms. In the Netherlands, for example, algorithms used to manage welfare fraud wrongly accused 26,000 families and forced them to repay debts they did not owe, sometimes driving them into financial ruin. The scandal, which led the government to resign in early 2021, illustrates the social risks posed by artificial intelligence.
The tool has also been accused of racial profiling, which points to another major axis of algorithmic inequality against which the European Union must guard. Although they are regularly improved, facial recognition techniques are well known to work less reliably on dark skin than on light skin, for example. A number of American cases have shown people wrongly accused because of such biased results. Actors like the NGO Access Now have called for urgent oversight, not least because the Union is testing various algorithmic tools at its borders to manage migration.
Other big issues in the debate?
The very classification of certain algorithmic tools adds to the debate: while the European Commission places emotion recognition systems in the "limited risk" category, for example, regulators such as France's CNIL deem them "highly undesirable." And what about ad-targeting systems? Are they high risk, or merely limited risk?
Another major challenge concerns the degree of transparency and interpretability of algorithms. For one thing, technology companies are notably reluctant to give outside parties (auditors, regulators) access to their source code. The European regulation also requires that the datasets used to train algorithms be error-free, to make it easier to justify the results they produce. That seems very hard to achieve when the ten datasets most used by the industry are known to be riddled with errors.
What is the timetable?
For human rights organizations, the text proposed in April 2021 was not precise enough to guarantee the protection of Europeans' rights. Conversely, in November 2021 the European Parliament's Special Committee on Artificial Intelligence in a Digital Age expressed concern about potential constraints on innovation. After much wrangling, Parliament's Internal Market Committee and Civil Liberties Committee took up the bill jointly.
In theory, to vote on a final version in November, legislators would have to reach a compromise on amendments by mid-October. The text could then enter the trilogue phase, that is, discussions between Parliament, the Council, and the European Commission. But some observers are skeptical that such a schedule can hold, given the highly sensitive nature of the issues covered by the regulation. On June 1, specialist journalist Luca Bertuzzi reported that some 3,200 amendments had been tabled, foreshadowing intense negotiations in Brussels over the summer.