The deepfake enters the French audiovisual landscape

Behind Thierry Ardisson's feat is the Mac Guff studio. Its co-founder describes how the digital image-maker's profession is being upended: "The profession is being invented. It's a new El Dorado, a jungle."

What if we used artificial intelligence to resurrect the dead … in order to interview them? It was on this improbable bet that Thierry Ardisson "revived" Dalida, a sacred monster of French song, for an hour and a half on May 2 in his program "Hôtel du Temps". To make it possible, the host enlisted the services of two French heavyweights: IRCAM for the voice and the Mac Guff studio for the image. The studio's head, Rodolphe Chabrier, is one of the most experienced figures in digital imaging and visual effects. With his studio he has worked alongside Gaspar Noé, Mathieu Kassovitz, Jan Kounen and Michel Ocelot; and it is to the sister studio, Illumination Mac Guff, a collaborator of Universal, that we owe the animated Despicable Me films.

"L'Hôtel du Temps" is not the studio's first technical tour de force. Behind the facial rejuvenation of Mathieu Amalric and Aleksey Gorbunov in the series Le Bureau des Légendes there is no make-up, but a deep learning tool created by Rodolphe Chabrier and his team: "Face Engine". A technology that is reshuffling the cards of the discipline. "Tomorrow, we will be able to make someone with two left feet dance as well as Michael Jackson," smiles Rodolphe Chabrier. So, is digital imaging dead, long live deep learning? The answer is not so simple. What is certain, on the other hand, is that it is a revolution. Interview.

Your studio specializes in digital imaging. Since when have you been using artificial intelligence?

Rodolphe Chabrier: We have been interested in deep learning since 2018. I couldn't tell you exactly when we switched over, but yes, artificial intelligence is now in the process of permanently disrupting the profession. Above all, it has happened in stages. We chose to focus on the face, with an artificial intelligence model we developed ourselves and named "Face Engine". This work first became visible to the general public in the last season of Le Bureau des Légendes, for which we received a technical César, and then in the series Lupin for Netflix. (The characters of JJA and Karlov, played by Mathieu Amalric and Aleksey Gorbunov, appear in their own roles made thirty years younger by the magic wand of the model Mac Guff created, editor's note.) Their wounded faces are also healed by the process.

It is a tool built on the type of network known as a GAN (generative adversarial network), which makes it possible to manipulate faces. We built our own tool with our own funds and with the help of the CNC (the French National Centre for Cinema). Thanks to the globalization of knowledge and to open source, "Face Engine" is a combination of the many AI models made available by the AI research community and of our own digital-imaging know-how, built up over thirty-five years. I have no doubt that we will eventually be caught by the pack, but for the moment we have a clear technical and organizational lead. There had already been an old-school film, The Irishman, made with ultra-sophisticated, heavy and extremely expensive 3D technology. (In that film, Al Pacino and Robert De Niro are de-aged, editor's note.) Our plan was to do the same thing, but with tools based on deep learning. We were lucky that, shortly before the pandemic began, Thierry Ardisson came to see us with his idea for a show, which confirmed that our approach was the right one.
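For readers unfamiliar with the term, here is a minimal, hypothetical sketch of what a GAN involves: a generator that produces face images and a discriminator that learns to tell them from real ones, the two trained against each other. The network sizes and names below are illustrative assumptions, not anything from Mac Guff's actual "Face Engine".

```python
# Minimal GAN sketch (PyTorch): a generator produces face-like images,
# a discriminator learns to tell them from real face crops.
# Names and sizes are illustrative assumptions, not the studio's tool.
import torch
import torch.nn as nn

LATENT = 128          # size of the random noise vector
IMG = 64 * 64 * 3     # flattened 64x64 RGB face crop

generator = nn.Sequential(
    nn.Linear(LATENT, 1024), nn.ReLU(),
    nn.Linear(1024, IMG), nn.Tanh(),      # outputs pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1),                   # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_faces: torch.Tensor) -> None:
    """One adversarial iteration on a batch of flattened real face crops."""
    batch = real_faces.size(0)
    noise = torch.randn(batch, LATENT)
    fake_faces = generator(noise)

    # 1) Discriminator: push real crops toward 1, generated crops toward 0.
    d_loss = bce(discriminator(real_faces), torch.ones(batch, 1)) + \
             bce(discriminator(fake_faces.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: try to make the discriminator answer "real".
    g_loss = bce(discriminator(fake_faces), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice, a production tool chains far larger, specialised architectures with hand-built pipelines around them, which is where the studio's thirty-five years of imaging know-how come in.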

Initially, your interest in AI didn't interest many people?

Rodolphe Chabrier: It was seen as our pet project, an "Internet thing" that would never be usable for broadcast (that is, of sufficient quality for distribution, on television or on Netflix for example, editor's note). Yet applications based on AI models were already appearing on smartphones. Of course, what is produced in that format cannot be used in film production, but it signalled a trend. With my partner Martial Vallanchon, we threw ourselves into it. When everything shut down for the first three months of Covid, that allowed us to keep moving forward.

I can't go into too much detail about the production process. We don't simply have a piece of software where you press a button and off you go. That is the whole problem with artificial intelligence: these are black boxes, and we don't control everything. But we have a process that gives us a minimum of control levers and, above all, lets us obtain usable, consistent results for broadcast. On the other hand, we have to invest heavily for this. The computing requirements in CPUs (central processing units) are substantial, but in GPUs, and therefore in graphics processing, they are enormous.

In Thierry Ardisson's program "Hôtel du Temps", you literally brought Dalida back to life. The image is surprisingly realistic. Is it a deepfake applied to the face?

Rodolphe Chabrier: Yes, "Hôtel du Temps" rests on a deepfake basis (a multimedia synthesis technique based on artificial intelligence that can be used to superimpose existing video or audio files onto other video or audio files, editor's note). But it wasn't that simple. AI models require large datasets. To get a very realistic rendering, you need multiple image sources, hundreds of hours of 4K footage. For this project, we are talking about sources that are 50 or 60 years old; we had to do a great deal of processing work upstream so that the material we had was compatible with the deep learning tools. And I'm not just talking about working on the pixels to make the images usable, but about what makes a good dataset, what we actually need …
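As an illustration of the kind of "upstream" work described here, this is a minimal sketch, assuming OpenCV and archival video files, of how old footage might be turned into a face dataset: extract frames, detect the face, crop and resize it. File names and parameters are hypothetical, not the studio's pipeline.

```python
# Hedged sketch of upstream dataset work: pull frames from archival footage,
# detect and crop the face, and resize it so a deep learning tool can use it.
import cv2
from pathlib import Path

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def build_face_dataset(video_path: str, out_dir: str, size: int = 256) -> int:
    """Extract face crops from one archival clip; returns number of crops saved."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
            cv2.imwrite(f"{out_dir}/face_{saved:06d}.png", crop)
            saved += 1
    capture.release()
    return saved

# Hypothetical usage: build_face_dataset("archive_1968.mp4", "dataset/dalida")
```

Real archival work adds restoration steps (deblurring, colour correction, upscaling) before the crops are good enough, which is the "working on the pixels" the interviewee mentions.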

To answer the question about facial work: yes, for now, the AI works on the face, with "Face Engine" (applied to an actress who played Dalida and who had learned the artist's gestures, during a shoot with the host, editor's note). But we are already working to extend the process to the body ("body engine") and even to the environment ("global engine"). We have already seen sequences from video games that were passed through AI models fed with Paris city databases: the rendering is perfectly realistic. Tomorrow, we will be able to make someone with two left feet dance like Michael Jackson. Or gather all the available material on a James Dean and have an actor who resembles him perform, retouched by artificial intelligence. That tomorrow is two or three years away. And in the longer term, we can imagine there being no actor at all any more, just a virtual character with the face and demeanour of James Dean. Granted, that would obviously require far greater resources.

In fact, what is interesting about artificial intelligence is that we no longer create objects directly. We build machines that make objects according to rules. These models should be understood in the sense of mathematical or climate models, if you like.

Kind of like an operating system?

Rodolphe Chabrier: Not quite. They are not objects in themselves, but models capable of understanding the world. "Face Engine" knows what a face is, for example. To design it, we feed it with data, but above all we have to train it. More than training it, we have to educate it. Just as with raising children, poor parenting will make it rude or superstitious. It's the same thing. It's hard to backtrack with an AI model, or you have to start almost from scratch, because the number of iterations is so high. Millions of loops are run. It can take hours, days, or even weeks to get a recognizable result. And we have to keep checking that the path taken to correct things is the right one.
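A minimal sketch of what this long "education" can look like in practice, assuming a PyTorch-style model, data loader and loss function (all placeholders): a loop over millions of iterations, with periodic checkpoints and loss reports so a human operator can judge whether training is heading the right way.

```python
# Hedged sketch of a long training ("education") loop: millions of iterations,
# with periodic checkpoints and loss reports for human monitoring.
# `model`, `data_loader` and `loss_fn` are placeholders, not the real tool.
import itertools
import torch

def educate(model, data_loader, loss_fn, iterations=2_000_000,
            report_every=10_000, ckpt_path="face_engine.ckpt"):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    batches = itertools.cycle(data_loader)      # loop over the dataset indefinitely
    for step in range(iterations):
        inputs, targets = next(batches)
        loss = loss_fn(model(inputs), targets)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

        if step % report_every == 0:
            # Save a checkpoint and log the loss: this is where an operator
            # decides whether the path taken is the right one.
            torch.save(model.state_dict(), ckpt_path)
            print(f"step {step:>8}  loss {loss.item():.4f}")
```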

But is that compatible with the very tight production schedules of TV shows or films?

Rodolphe Chabrier: On the contrary, it is completely compatible. The proof: we produced more than an hour of visual effects for Thierry Ardisson's show. What takes time is creating the model. Once it is designed, it can churn out material at great speed. We generally speak in terms of "produced seconds". A graphic designer working alone on a visual effect needs roughly a year to produce an hour. Once the model is in place, you can feed in ten minutes of footage and get a first result the next day. I'm exaggerating a little, of course. But it changes everything. For work on a face, it means we can start working even before the shoot or the edit has been finalized.
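To make the "produced seconds" idea concrete, here is a minimal, hypothetical sketch: once a model is trained, applying it frame by frame is a comparatively cheap inference pass, so throughput can be measured as seconds of finished footage per unit of compute time. The model and frame format are assumptions.

```python
# Hedged illustration of "produced seconds": inference over frames is cheap
# compared with training. Model and frame shapes are placeholder assumptions.
import time
import torch

def render_clip(model, frames, fps=25):
    """Run a trained model over a list of frame tensors and report throughput."""
    model.eval()
    start = time.perf_counter()
    with torch.no_grad():                      # inference only, no gradients
        outputs = [model(frame.unsqueeze(0)) for frame in frames]
    elapsed = time.perf_counter() - start
    produced_seconds = len(frames) / fps       # duration of footage generated
    print(f"{produced_seconds:.1f}s of footage in {elapsed:.1f}s of compute")
    return outputs
```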

Is this a new profession?

Rodolphe Chabrier: The profession is being invented. It's a new El Dorado, a jungle. I feel as if I'm back thirty-five years ago, when we were doing 3D on PCs. We will need data experts who know how to recover, process and improve data; graphic designers; the developers of these tools; but also AI computing operators who will be educators, able, in effect, to observe and correct the AI. As for me, I see myself as a chef with a brigade of multiple talents, someone who understands the processes, manages the computations and integrates the data. I suggest that we take this ingredient, that we put it in the oven, then plunge it into ice water or under a hair dryer, and we see whether it worked. There is something very organic about the way it works. Once the recipe is found, the model is found. The trade is to create AI models.
