Théodore Christakis recommends: “The debate needs to be clarified first, in order to focus on the most dangerous uses.”

In November 2019, the CNIL (Commission Nationale de l'Informatique et des Libertés, France's data protection authority) called for “a debate commensurate with the stakes” on facial recognition. In April 2021, Politico unveiled a draft European regulation on artificial intelligence, sparking further controversy, with 51 associations calling for a ban on “biometric mass surveillance” to protect individual rights.

It is in this context – and as the Senate released a report calling for the creation of a framework for the use of facial recognition in public spaces – that the AI Regulation Chair at Université Grenoble Alpes published a six-chapter mapping of facial recognition uses in Europe. 20 Minutes interviewed Théodore Christakis, director of the team conducting this long-term project.

“The current debate is sometimes distorted by an approximate knowledge of [facial recognition] and of how it actually works,” the CNIL stated in 2019 – a remark you quote in the introduction to your study. How did this observation shape your work?

When the CNIL said that “a debate commensurate with the stakes” was needed, we noticed that debates over facial recognition often mixed up very different things. Some discussions went so far as to bring in emotion recognition or video surveillance, even though neither of these is facial recognition. Yet there are real questions to ask…

The technology behind PARAFE, which you use at the airport, and the technology used by British police to identify a person in a crowd are both based on facial recognition, but they do not raise the same risks.

With my team, we therefore decided to bring our scientific perspective into the discussion: our aim was to clarify things from a technical point of view, to document existing practices across Europe and to draw lessons from them, so that lawmakers, politicians, journalists and citizens can debate calmly.

Classification of facial recognition uses established by the AI Regulation Chair at Université Grenoble Alpes – MIAI / AI Regulation Chair

You propose to classify the uses of facial recognition into three main categories: verification, identification and facial analysis – the latter not strictly facial recognition, but nevertheless working with facial features. Why separate the three?

The first category (in blue in the figure), verification, is also called authentication. It compares one image with another: for example, your biometric passport photo against your face at the PARAFE gate as you pass through the airport. The machine checks whether the two match; if so, it opens the gate and then deletes the data. That does not rule out risky or problematic uses: when such technology was tested in two high schools in Marseille and Nice, for example, the CNIL considered it unacceptable.

But it is still different from identification, which at the moment is only used in the UK. There, we are talking about cameras that police place by the roadside or near stations, which scan the crowd looking for matches against a pre-established list of thousands of wanted criminals. In this case, the problems are very different: people have no way to refuse to be subjected to the technology, the surveillance takes place without any oversight… That said, this type of technology is also being trialled with Mona, at Lyon airport. There, if users wish, they can enrol their face via their smartphone and then go through every checkpoint – baggage drop, customs, boarding – without their boarding pass. They have a choice, so the question, even though it involves facial recognition, is different from the one raised by the British police.
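To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting the two operations: verification compares one live capture against one enrolled reference (a 1:1 check), while identification searches a capture against an entire watch list (a 1:N search). The embed function, the threshold and the data are illustrative assumptions, not the systems described in the report.

```python
import numpy as np

THRESHOLD = 0.75  # illustrative decision threshold, not a real system parameter

def embed(image) -> np.ndarray:
    """Stand-in for a face-embedding model that maps a face image to a vector.
    In a real system this would be a trained neural network."""
    rng = np.random.default_rng(sum(map(ord, str(image))))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face templates."""
    return float(a @ b)

def verify(live_image, reference_image) -> bool:
    """1:1 verification (a PARAFE-style gate): does this face match this one
    document photo? A single comparison, after which the data can be discarded."""
    return similarity(embed(live_image), embed(reference_image)) >= THRESHOLD

def identify(live_image, watchlist: dict):
    """1:N identification (live facial recognition on a crowd): one capture is
    searched against every template on a pre-established list."""
    live = embed(live_image)
    scores = {name: similarity(live, tpl) for name, tpl in watchlist.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= THRESHOLD else None

# Toy usage: the names and images here are entirely fictitious.
watchlist = {f"person_{i}": embed(f"photo_{i}") for i in range(1000)}
print(verify("gate_capture", "passport_photo"))  # one comparison
print(identify("street_capture", watchlist))     # one capture against 1,000 templates
```

The only point of the sketch is the shape of the comparison: a single 1:1 check that the traveller initiates, versus a 1:N scan of everyone who passes in front of the camera.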

In the third part of your report, which deals with facial recognition in public spaces, you emphasize the difference between “consenting” to and “volunteering” for the use of facial recognition technology. What is at stake?

First, it should be stressed that even if a use is described as “consensual” or “voluntary”, that does not prevent it from being problematic. For example, in the case of the PACA high school students, it was held that their consent was problematic because they were under the authority of their school. Then, to take the airport example again: when you arrive in Paris or Lyon, you may prefer to go through the gate equipped with a facial recognition system, but you have the option not to. That is voluntary use: other choices always remain available. Consent, by contrast, must be given by people who are properly informed and capable of consenting, etc. The distinction matters, especially when the debate turns to “banning all facial recognition.” That way of framing the problem forgets that the technology has useful applications: some people use it to unlock their smartphone, and those who do not want it use a PIN code instead. A choice is possible.

In any case, as a user, both of these options expose me to a very different risk than being subjected to a system that treats me as a potential criminal because I crossed the street in front of a police camera.

The fourth part of your report deals with the use of facial recognition in criminal investigations. In your view, what are the terms of the debate?

Here again, the uses are very varied. Take, say, a robbery or a murder. The perpetrator was filmed by a surveillance camera. In this case, France has legislation that allows the police to compare the image of the offender with the criminal records processing file (TAJ, whose existence is itself contested, editor's note). This is facial verification: it raises its own questions, but it is very different from applying facial recognition algorithms to live video streams, as was done during the Nice carnival – with participants' consent – or as it is used in Britain.

The last part of your study focuses on uses of facial analysis in public spaces, which are still not very widespread but which you expect to multiply. Why is it important to think about them now?

Mask-wearing detection systems such as those offered by Datakalab are not facial recognition, because no “biometric template” is created. But they are still facial analysis, so there is clearly cause for concern. The same goes for emotion recognition technologies. When it comes to detecting whether someone is falling asleep at the wheel, it is great, it can save lives. But when we are told that they can identify personality or lies, we are verging on pseudoscience! (On this subject, read the chapter devoted to emotions in Kate Crawford's Atlas of AI, with whose analysis the researcher says he “completely agrees”, editor's note.) Producing statistics on mask-wearing, why not. Facial analysis during a job interview is far more questionable.
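As a purely illustrative sketch (the input format and the mask classifier below are hypothetical stand-ins, not Datakalab's system), the difference can be shown in code: a facial-analysis pipeline of this kind outputs only an aggregate statistic and never computes, stores or matches a per-person biometric template.

```python
from typing import Iterable

def classify_mask(face_crop: dict) -> bool:
    """Hypothetical mask classifier. A real one would look at the lower half
    of a detected face; here it reads a precomputed label so the sketch runs."""
    return face_crop["mask"]

def mask_wearing_rate(frames: Iterable[list]) -> float:
    """Facial analysis in the sense discussed above: the output is an aggregate
    share of mask wearers. No biometric template is built and no face is
    matched against a database; nothing identifying is kept once counted."""
    masked = total = 0
    for faces in frames:
        for face in faces:
            total += 1
            masked += classify_mask(face)
    return masked / total if total else 0.0

# Toy data standing in for faces detected in successive camera frames.
frames = [
    [{"mask": True}, {"mask": False}],
    [{"mask": True}],
]
print(f"{mask_wearing_rate(frames):.0%} of detected faces wore a mask")
```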

What are your main recommendations for legislators and/or citizens?

Clarify the debate. Get informed – that is why we did this work – but above all, be explicit about which cases we are talking about. This will make it possible to tackle the most dangerous uses of facial recognition first. This is important: the Senate has submitted a report on this question and called for the creation of a European regulator, and everyone must be able to understand precisely what is at stake in each of these uses. This will also make it easier to see where laws already provide a minimal framework and where the gaps are most glaring.
