These images are incredibly realistic … but they were created by Google’s AI from a simple description.


Google’s Brain lab, which specializes in deep learning, has unveiled its latest advance in artificial intelligence: Imagen, a program that creates realistic images from short text descriptions. The results are stunning, but potentially dangerous.

Summary

  • More convincing than the competition?
  • An AI not available to the general public
  • Avoiding dangerous misuse

“Unprecedented photorealism combined with a deep level of language understanding”: that is how the Google Brain team sums up Imagen, its latest creation. It is an artificial intelligence designed to produce photorealistic images from short text descriptions. The principle is remarkably simple: engineers write a sentence, for example “A cute corgi lives in a house made of sushi”, and hand it to Imagen, which composes a realistic visual rendering of it. And the results do not lack flavor.

This corgi, like the other images featured on the Imagen project page, sprang, so to speak, from the imagination of Google’s artificial intelligence. Imagen thus treads on the turf of other similar AIs, such as DALL-E, developed by OpenAI.


More convincing than the competition?

Google Brain researchers argue that the results produced by Imagen impress observers more than those of other similar AIs. This claim rests on a benchmark built from scratch by the same scientists, called DrawBench: it gathers 200 test prompts that were submitted to Imagen as well as to three other AIs: VQ-GAN, LDM and DALL-E 2. Each algorithm produced its own rendering, which was then shown to human raters tasked with judging how faithfully the image matched the text. Imagen came out on top every time.
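To picture how such a head-to-head evaluation can be tallied, here is a minimal sketch in Python, assuming each rater simply votes for the model whose image best matches the prompt. The model names, sample prompts and function are placeholders for illustration only; this is not Google’s actual DrawBench code.

```python
from collections import Counter

# Hypothetical illustration of a DrawBench-style human preference tally.
# Model names and prompts below are placeholders, not real evaluation data.
MODELS = ["Imagen", "VQ-GAN", "LDM", "DALL-E 2"]

def tally_preferences(votes):
    """votes: list of (prompt, preferred_model) pairs collected from human raters."""
    counts = Counter(model for _, model in votes)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    # Share of votes each model received across all prompts and raters.
    return {model: counts.get(model, 0) / total for model in MODELS}

if __name__ == "__main__":
    fake_votes = [
        ("A cute corgi lives in a house made of sushi", "Imagen"),
        ("A robot couple fine dining", "Imagen"),
        ("A brain riding a rocketship", "DALL-E 2"),
    ]
    print(tally_preferences(fake_votes))
```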


Obviously, this study should be taken with a grain of salt, since it comes from Google’s own researchers. The lab nevertheless plays the transparency card as far as possible, publicly releasing the list of the 200 DrawBench prompts so that everyone can form their own opinion.

An AI not available to the general public

The examples highlighted by Google Brain are fascinating. Then again, we can legitimately assume that only the most successful results were selected to showcase Imagen. It is possible to experiment with the AI, after a fashion, on the project page, but the choices are very limited and the images shown are not generated in real time by the artificial intelligence.

Unfortunately, Google’s researchers are reluctant to offer Imagen to the general public, at least for now. The reason given is chiefly ethical: with an artificial intelligence able to produce a photorealistic rendering of almost anything, Google’s scientists fear that it could have “a complex impact on society”.


Avoiding dangerous misuse

“The potential risks of misuse raise concerns regarding the open-sourcing of code and demos,” we can read on the project website. “At this time, we have decided not to release code or a public demo. In future work, we will explore a way to externalize this technology responsibly, balancing the value of external auditing with the risks of unrestricted use.”

One of the next goals of Google’s researchers is to filter out the unwanted words and content that Imagen might be led to use in its renderings. “Specifically, we relied on the LAION-400M dataset, which is known to contain a wide range of inappropriate content, including pornographic images, racist slurs and harmful social stereotypes,” the scientists explain. “There is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision not to release Imagen for public use.”

It is easy to imagine the disasters that could result from the misuse of such a tool, and it is not surprising that Google is unwilling to take any risks. It remains to be seen whether such a powerful artificial intelligence can one day be put to good use.
