“As the 2024 Olympics approach, we must test biometric recognition in public space.”

FigaroVox / Tribune – For Vincent Barthett and Leo Amsalem, experimenting with this technology for law enforcement, as recommended by the Senate, would address concerns about both security and digital sovereignty.

Vincent Barthett is a lecturer at the University of Lorraine and an associate researcher at the Sorbonne Centre for Economics. He is the author of To Err Is Human: Within the Bounds of Rationality (CNRS Éditions, 2018; Biblis, 2021).

Leo Amsalem is a political scientist and a graduate of the Sorbonne and the London School of Economics.

Together they wrote The New Oracles: How Algorithms Predict Crime (CNRS Éditions, 2021).

Recent events at the Stade de France during the Champions League final have called into question France's ability to ensure the safety of the 2024 Olympics. For Tokyo 2020, the Japanese authorities deployed the most expensive security apparatus in the history of the Games, including technical solutions such as facial recognition, virtual policing, and early crime-detection programs. When it comes to security, it is now France's turn to confront the question of AI technology. The prospect of Paris 2024 imposes a deadline for this reflection that must not be missed, lest France and Europe fall behind the world leaders in AI, including the United States and China, in developing these technologies.

The large-scale deployment of these technologies has long been the subject of serious reservations and criticism in France, but also at the European level, as shown by a resolution of the European Parliament in October 2021 aimed at establishing a moratorium on facial recognition technologies, on the grounds that they are imperfect and create excessive risks for civil liberties. Does this position of principle stand the test of reality when it comes to ensuring the safety of athletes, delegations, and millions of spectators?

The challenge is to devise French and European solutions compatible with our rule of law, and the only way to do that is through experimentation.

Vincent Barthett and Leo Amsalem

Last November, we published a column highlighting the dangers of such a position: clinging to a strict defense of civil liberties closes the door to the risk-benefit analysis that is better able to ground reasonable and realistic choices. Faced with the real risks posed by these technologies, the challenge is to devise French and European solutions compatible with our rule of law, and the only way to do that is through experimentation.

Last May, the Senate released the report of its information mission on biometric recognition in public space. In line with the European Commission's proposal of April 21, 2021 for an AI Regulation, the authors of the report outlined a path toward a realistic compromise, free of posturing, on the difficult trade-offs between security and freedom, and between innovation and control. On these difficult but essential aspects of a Social Compact 2.0, we should welcome the fact that Parliament is taking a clear position and conducting the debate without compromising on the necessity of public liberties, and without depoliticizing a highly divisive issue.

Such a law would make it possible to authorize certain real-time facial identification practices on an experimental basis, such as protecting particularly sensitive sites during large events in the face of a terrorist threat.

Vincent Barthett and Leo Amsalem

The senators rightly pointed out that the legal vacuum currently surrounding these technologies protects no one, and that a three-year trial law, with a public and independent evaluation system, could create a fruitful framework within which a French position could take shape. Such a law would authorize certain real-time facial identification practices on an experimental basis, such as securing particularly sensitive sites during large events in the face of a terrorist threat, or even tracking a person who has committed a serious crime. It would also settle, one way or the other, the question of allowing intelligence services to use a facial recognition system to identify a wanted person or to reconstruct their movements.

It should be noted that a 2013 study by researchers at the University of Michigan found that facial recognition algorithms could quickly identify the perpetrators of the Boston Marathon attack, whereas it took the police several days to do so. In 2005, after the London attacks, British police had to review 6,000 hours of surveillance-camera footage: a process that would benefit from being automated, after testing and evaluation.

The experiments encouraged by the Senate report are the first step toward our ability to develop our own AI solutions.

Vincent Barthett and Leo Amsalem

If anchoring a French and European position is necessary, it is also part of building our digital sovereignty. In the absence of sovereign AI solutions, we have clean hands but no hands at all, and we are forced, when an emergency compels us to overcome our reluctance, to rely on foreign solutions less suited to our democratic choices. The experiments encouraged by the Senate report are the first step toward our ability to develop our own AI solutions.

Faced with new security challenges, we can now choose anticipation over delay, which would otherwise lead us to rush or even to submit. Because public action calls for oversight, the state must accompany any strengthening of its surveillance arsenal with an equal strengthening of guarantees designed to prevent any excessive or disproportionate use. With the CNIL, France already has an expert and demanding authority that would benefit from being empowered by law to become a true police of biometric recognition. Only this combination of experimentation, innovation, and democratic control will allow us to unlock the full range of opportunities offered by these new technologies while weighing their potential adverse effects.
