Deepfakes: between technological innovation and a major security risk

The technology behind deepfakes (a contraction of “deep learning” and “fake”) was created in 2014 by the researcher Ian Goodfellow. Called a GAN (Generative Adversarial Network), it pits two algorithms against each other: one generates fake images or video, the other tries to detect the fakes. Through this adversarial process the two algorithms continually improve each other, producing video that is as close to reality as possible. They are trained on databases such as the image and video banks found on the Internet; the more media coverage a person has, and the easier it is to find content about them, the more realistic the deepfake will be. This technology therefore raises many problems, particularly in cases of image hijacking or manipulation.
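The adversarial principle described above can be sketched in miniature. In this toy example (our own illustration, not the article's technology itself), the “real” data is just a Gaussian of numbers centred on 4, the generator is a linear map of noise, and the discriminator is a logistic regression; at each step the discriminator learns to separate real from fake while the generator learns to fool it, so the generated samples drift toward the real distribution. All names and hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Generator: x_fake = w_g * z + b_g   (starts far from the real data)
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d)
w_d, b_d = 0.1, 0.0
lr, batch = 0.03, 128

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)   # samples from the "true" data
    z = rng.normal(0.0, 1.0, batch)      # generator input noise
    fake = w_g * z + b_g                 # generator forward pass

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    s_r = sigmoid(w_d * real + b_d)
    s_f = sigmoid(w_d * fake + b_d)
    grad_wd = np.mean(-(1 - s_r) * real) + np.mean(s_f * fake)
    grad_bd = np.mean(-(1 - s_r)) + np.mean(s_f)
    w_d -= lr * grad_wd
    b_d -= lr * grad_bd

    # Generator update: push D(fake) toward 1 (non-saturating loss)
    s_f = sigmoid(w_d * fake + b_d)
    grad_wg = np.mean(-(1 - s_f) * w_d * z)
    grad_bg = np.mean(-(1 - s_f) * w_d)
    w_g -= lr * grad_wg
    b_g -= lr * grad_bg

# The mean of the generated samples should have drifted toward the real mean (4)
fake_mean = float(np.mean(w_g * rng.normal(size=5000) + b_g))
print(round(fake_mean, 2))
```

A real deepfake GAN replaces these scalars with deep convolutional networks over images, but the back-and-forth between the two losses is the same mechanism.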

The first viral deepfake videos were published on the community site Reddit in 2017. According to some sources, several thousand fake videos are now online.

However, this technology is not used only for harmful purposes. Such “realistic” videos could, for instance, be very useful in film-making, where they would allow deceased characters to be brought back to the screen. CGI recreation of actors already exists and has proven extremely useful:

On the set of Furious 7, after the death of Paul Walker (one of the main actors), the film could be completed by digitally recreating the actor’s image. Deepfake technology goes further still, providing realistic images and audio.

This technology, though considered revolutionary by many, raises a number of questions and concerns in both the public and private sectors. Cases of abuse are growing exponentially and can sometimes be very serious. Moreover, controlling them is complicated by several factors:

  • First, the easy access to the many tools and applications for creating deepfakes (FaceApp, ZAO, Reface, SpeakPic, DeepFaceLab, FakeApp, etc.). This availability puts the practice within reach of anyone, including people with malicious intent.

  • Second, the difficulty of tracing deepfakes and of building control tools. Because GAN technology, based on machine learning, becomes autonomous through the continuous contest between its two opposing networks, tracing is complex. Detection tools always lag behind: as with malware, detection technology reacts to new deepfakes after the fact, so the creators always keep a certain lead.
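The detection lag described in the second point can be illustrated with a deliberately simple sketch. Here we invent a one-number “artifact score” per video (a stand-in for whatever features a real detector uses): a detector thresholded on the fakes it has seen catches them easily, but a newer generation of fakes whose scores sit closer to real footage slips past it. All distributions and numbers are our own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "artifact score" per video: real footage scores low,
# the fakes the detector was trained on score high, and a newer
# generation of fakes scores much closer to real footage.
real      = rng.normal(0.0, 1.0, 2000)
old_fakes = rng.normal(3.0, 1.0, 2000)
new_fakes = rng.normal(0.5, 1.0, 2000)   # improved generator, fewer artifacts

# Minimal detector: flag anything above the midpoint of the two
# training-set means as fake.
threshold = (real.mean() + old_fakes.mean()) / 2.0

old_detection = float(np.mean(old_fakes > threshold))
new_detection = float(np.mean(new_fakes > threshold))
print(f"known fakes caught: {old_detection:.0%}, new fakes caught: {new_detection:.0%}")
```

The detector remains tuned to yesterday’s fakes: until it is retrained on examples of the new generation, most of them pass as real, which is exactly the head start the article describes.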

The risks posed by this new technology are multiple; its use creates new problems highlighted by many companies and states.

The first is the risk of manipulation and disinformation. Hijacking the image of influential figures (public or private) in deepfake videos can pose a serious threat. This was the case in Gabon, where President Ali Bongo had not appeared in public for several months after falling ill in 2018. In December of that year, a video showed Ali Bongo reassuring his people about his health. His political opponents immediately denounced the video as a deepfake, and the furor it created helped precipitate an attempted coup against the president. (The video was not, in fact, a deepfake.)

The risk of data or financial theft is also very high, especially through “deep voice” fraud (using artificial intelligence to reproduce a voice, on the same principle as deepfakes). The manager of a large bank in the UAE fell victim to this fraud: criminals reproduced the voice of one of his most important clients and persuaded him to transfer some $35 million, causing great damage to his image and his business.

Identity theft and attacks on opponents are likewise a major risk. This is the case, for example, of Rana Ayyub, an Indian journalist who defends women’s rights. During a defamation campaign against her, her face was inserted into several pornographic videos. This practice is becoming more widespread and mainly targets women.

Given the ease of making deepfakes and their exponential growth, the private and public sectors are alarmed and now trying to protect themselves. This is notably the case of Google, which in 2019 released a database of more than 3,000 reference deepfakes and launched a competition to identify such fakes, with the support of other companies in the digital sector. Some companies go further and develop AI capable of “de-identification”. This is the case with Facebook, whose FAIR research lab is working on a filter of this kind that could be applied to videos posted on the platform.

States, for their part, are legislating to provide a framework for restricting the spread of deepfakes. On October 22, 2018, a law against the manipulation of information (“fake news”) entered the French legal arsenal, limiting the spread of false information on the Internet. It faces limits, however: individual freedom of expression, the neutrality of the major platforms (Facebook, Twitter, etc.), and users’ right to anonymity. All these difficulties complicate the implementation of such controls.

The lag in the development of these control tools is alarming. The main fear is the creation of videos without any source material. Today, certain experts can still trace a fake video back to its source; however, experts agree that as the technology continues to evolve, videos with no traceable source are very likely to appear in the near future, which will complicate any control.

“We are not far from being able to create completely artificial content from a text description,” says Laurent Amsaleg, research director and head of the LinkMedia team.

Pierre Parent, for Risk Club

