When the net takes hold of the fake

Our columnist Imed Baghjala looks at the rapid rise of the deepfake phenomenon and the risks that this use of AI represents … in the emerging era of the metaverse.

As mentioned in a previous column, fake news — also called infox, false news, false information, misleading information, or a “canard” — is information fabricated and spread for a commercial or (geo-)political purpose, or simply to mislead or deceive the public. Ideally, such “fakes” should be identified, decoded/deciphered, unmasked, avoided, stopped, neutralized and, at best, suppressed.

Today, we hear even more promises around deep learning, a more advanced form of machine learning (and, further ahead, quantum deep learning). It is a set of machine learning methods that attempt to model data with a high level of abstraction through layers of various nonlinear transformations. Remember the word “deep” here.
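To make the idea of stacked nonlinear transformations concrete, here is a minimal sketch (not from the column) of a forward pass through a small network in Python with NumPy; the layer sizes and the ReLU/sigmoid choices are illustrative assumptions, not a reference implementation.

```python
import numpy as np

# A toy "deep" model: data flows through successive nonlinear transformations,
# each layer producing a more abstract representation of the input.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)          # nonlinearity of the hidden layers

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # squashes the output to [0, 1]

# Randomly initialised weights for a 4 -> 8 -> 8 -> 1 network (sizes are arbitrary).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)     # first level of abstraction
    h2 = relu(h1 @ W2 + b2)    # second, "deeper" level of abstraction
    return sigmoid(h2 @ W3 + b3)

x = rng.normal(size=(3, 4))    # a batch of 3 random input samples with 4 features
print(forward(x))              # 3 outputs between 0 and 1
```

The “deep” in deep learning simply refers to chaining many such layers, so that each one builds on the representation produced by the previous one.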

We also speak of DeepTech: a young company (start-up) that develops a product based on a significant, groundbreaking or disruptive engineering innovation. Three main criteria distinguish a DeepTech start-up from others: 1. high added value of the product on the market; 2. technology that is often protected by a patent; and 3. a close link with scientific research (collaboration, patent licence, researcher-entrepreneur).

Infox, fraud, cyber-malice …

When Deep takes hold of Fake, we get the deepfake, or hyper-faking. The underlying disruptive technology, invented by researcher Ian Goodfellow in 2014, is the GAN (Generative Adversarial Network), an unsupervised learning algorithm that creates content with a high degree of realism. It is a media-synthesis technique based on artificial intelligence (AI): existing video or audio files can be superimposed onto other video or audio files (for example, swapping a person’s face in a video, or reproducing a person’s voice to make it say fabricated things). This morphing technique was originally used in the film world for dubbing and in comedy shows.

The same technique can also be used to create fake news and malicious hoaxes. A recent example is the video of President Volodymyr Zelensky, broadcast on a hacked Ukrainian channel, which put false remarks in his mouth calling on his army to lay down its weapons. The video, which went viral, was later deleted by Facebook.

With this technology, two algorithms train each other: one tries to make the fake as believable as possible; the other tries to detect the forgery. The two algorithms thus improve together over time through their respective training, and the more examples they see, the better they get. The deepfake phenomenon officially emerged in the autumn of 2017, appearing on the website Reddit. Since 2017, the number of deepfakes has risen sharply; according to Deeptrace researchers, the number of deepfake videos has nearly doubled every year since 2019.
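As an illustration of this adversarial back-and-forth, here is a minimal, self-contained sketch (not from the column) of a GAN training loop in PyTorch on toy one-dimensional data; the network sizes, learning rates and the Gaussian “real” data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to imitate samples from a "real" distribution
# (a 1-D Gaussian), while the discriminator learns to tell real from fake.

torch.manual_seed(0)
latent_dim = 8

generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" samples drawn from N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)                    # the generator's current forgeries

    # 1) Train the discriminator: label real samples 1, fake samples 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator call its fakes "real".
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# After training, generated samples should drift toward the "real" mean of 3.0.
print(generator(torch.randn(5, latent_dim)).detach().squeeze())
```

The same dynamic scales up to images and video: the generator becomes a face-synthesis network and the discriminator a forgery detector, and each improves precisely because the other does.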

Risks associated with deepfakes

Thanks to AI, it is now very easy for anyone to create a deepfake without any particular technical knowledge, simply by downloading fairly basic applications such as FakeApp or FaceMorpher. By analysing facial movements, the application’s algorithm can “bring a face to life” or make it age. These are not yet large-scale deepfakes capable of producing genuinely compromising content, but as the technology develops they will become more realistic, and therefore more problematic … After fake news and its damaging effects on social networks, the deepfake therefore constitutes a new threat on the web. Manipulation, disinformation, insults, defamation … the dangers of the deepfake go further still, such as creating fake erotic videos (sextapes) featuring celebrities (or not) and pornographic “revelations” (i.e. revenge porn).
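The consumer apps mentioned above rely on neural networks trained on many images of the target face. The crude sketch below (not how those apps work internally, and not from the column) only illustrates the basic morphing idea of locating a face in an image and blending another face over it, using OpenCV’s stock face detector; the file names are placeholders.

```python
import cv2

# Crude face-overlay illustration: detect a face in a target photo and blend a
# source face over it. Real deepfake tools use trained neural networks rather
# than this simple pixel blending.

target = cv2.imread("target_photo.jpg")      # placeholder: photo whose face we overwrite
source_face = cv2.imread("source_face.jpg")  # placeholder: cropped face to paste in

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    x, y, w, h = faces[0]                          # first detected face region
    resized = cv2.resize(source_face, (w, h))      # fit the source face to that region
    blended = cv2.addWeighted(target[y:y+h, x:x+w], 0.3, resized, 0.7, 0)
    target[y:y+h, x:x+w] = blended                 # paste the blended patch back
    cv2.imwrite("naive_swap.jpg", target)
```

The result of such naive blending is obviously fake; the point of the neural approach is to learn how the target face moves and lights so the swap becomes hard to spot.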

A video appearing to show New Zealand Prime Minister Jacinda Ardern smoking crack (crystallized cocaine), for example, went viral last summer. The leader’s face had in fact been superimposed onto that of a YouTuber in original footage shot in 2019.

To counter this, the major web giants are preparing their response. Last June, Facebook scientists unveiled a method that, thanks to artificial intelligence (AI), should make it possible both to detect deepfakes and to determine their origin.

Last year Microsoft introduced software that can help detect deepfake photos or videos, while in late 2019 Google released thousands of deepfakes created by its own teams, making them available to the engineers and researchers who are developing automated methods for detecting manipulated images.

The metaverse put to the test by deepfakes

In the United States, the threat was taken seriously in the run-up to the 2018 midterm elections. Three members of the US Congress sent a letter to Director of National Intelligence Daniel R. Coats asking him to produce a report on the phenomenon. “We are deeply concerned that deepfake technology could soon be deployed by malicious foreign actors,” wrote Adam B. Schiff and Stephanie Murphy (Democrats) and Carlos Curbelo (Republican), who feared “blackmail” operations or “misinformation” campaigns aimed at individuals, with “national security” under threat.

In short, if deepfakes are already being used today to spread false information and propaganda, fake political statements, pornography…, the technique could prove even more damaging with the innovation that the metaverse represents. With advances in extended reality technologies (augmented, virtual or mixed), identity theft could be used to fake a meeting or a transaction, or even to impersonate a student during a remote exam on the virtual campus of a future university.

Once again, developing citizens’ digital intelligence is the only way to maintain healthy and lasting personal and professional social relationships. The same holds at the level of companies and even states.
