Google may have created the first general artificial intelligence, "competing" with the human mind


DeepMind, a company specializing in artificial intelligence (and part of Google), has just introduced its new AI, called "Gato". Unlike "classic" AIs, which specialize in a single task, Gato is capable of performing more than 600 tasks, often better than humans. Debate has erupted over whether this is truly the first "Artificial General Intelligence" (AGI); experts are skeptical of DeepMind's announcement.

Artificial intelligence has profoundly changed many disciplines. Highly specialized neural networks are now able to deliver results beyond human capacity in many cases.

A major challenge for AI is the realization of Artificial General Intelligence (AGI): a system able to understand and perform any task a human being could. Such a system could compete with human intelligence, perhaps even develop a certain level of consciousness. Earlier this year, Google unveiled an AI capable of coding like an average programmer. Recently, in this race, DeepMind announced the creation of Gato, presented by some as the world's first AGI. The results are published on arXiv.

An unprecedented generalist agent model

A single AI system capable of solving many tasks is nothing new. Google, for example, recently began using a system called "Multitask Unified Model" (MUM) in its search engine, which can handle text, images and video to perform tasks ranging from cross-lingual search to matching search queries with relevant images.

Incidentally, Senior Vice President Prabhakar Raghavan gave a striking example of MUM with a mock search query: "I've hiked Mount Adams and now want to hike Mount Fuji next fall; what should I do differently to prepare?" With MUM, Google Search can surface the differences and similarities between Mount Adams and Mount Fuji, and suggest suitable preparation and training approaches.

Gato's guiding design principle is to train on the widest variety of relevant data, spanning multiple modalities such as images, text, proprioception, joint torques, button presses and more, covering both discrete and continuous observations and actions.

To process this multimodal data, the scientists encode it into a flat sequence of "tokens". These tokens represent the data in a form Gato can handle, allowing the system, for example, to work out which words in a sentence carry grammatical meaning. The sequences are batched and processed by a transformer neural network, of the kind commonly used in language processing. Unlike traditional approaches, where a separate network is trained for each task, the same network, with the same weights, is used across all the different tasks. During training, each connection is assigned a specific weight and thus a particular importance; in simple terms, the weights determine how input data propagates through the network to compute an output.
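As a rough sketch of what "flattening everything into one token sequence" can look like, the toy code below maps words and continuous sensor values (e.g. a joint torque) into a single stream of integer tokens. The vocabulary split, mu-law parameters and bucket counts are illustrative assumptions, not Gato's actual scheme.

```python
import math

# Illustrative sketch: serialize multimodal data into one flat token stream.
# All constants here (vocab offset, bin count, mu-law parameters) are assumptions.

def mu_law_encode(x, mu=100, m=256):
    """Compress a continuous value into [-1, 1] via mu-law companding."""
    return math.copysign(math.log(abs(x) * mu + 1.0) / math.log(m * mu + 1.0), x)

def tokenize_continuous(x, vocab_offset=32000, bins=1024):
    """Map a continuous observation to a discrete token id."""
    v = max(-1.0, min(1.0, mu_law_encode(x)))   # clamp to [-1, 1]
    bucket = int((v + 1.0) / 2.0 * (bins - 1))  # uniform bucketing
    return vocab_offset + bucket

def tokenize_text(text, word_to_id):
    """Toy text tokenizer: one token per known word."""
    return [word_to_id[w] for w in text.split() if w in word_to_id]

# One flat sequence mixing text tokens and continuous sensor readings:
vocab = {"press": 1, "button": 2}
sequence = tokenize_text("press button", vocab) + [
    tokenize_continuous(t) for t in (0.5, -0.25)
]
```

Because every modality ends up in the same integer vocabulary, a single transformer can consume text, actions and sensor readings interchangeably, which is the core idea the paragraph above describes.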

With this representation, Gato can be trained and sampled like a standard large-scale language model, on a large number of datasets comprising the experience of agents in both simulated and real-world environments, in addition to various natural language and image datasets. At run time, Gato uses its context of sampled tokens to determine the form and content of its responses.

An example of Gato's execution. The system consumes a sequence of previously sampled observation and action tokens to produce the next action. The new action is applied to the environment (a game console in this illustration) by the agent (Gato), a new set of observations is obtained, and the process repeats. © S. Reed et al., 2022.
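The observe→act loop in the figure can be sketched in a few lines. The environment and policy below are stand-ins (a counter and a trivial rule), not DeepMind's actual API; the point is the shape of the loop: the context grows with each action and observation, and the next action is sampled from that context.

```python
# Minimal sketch of the observe -> act loop from the figure.
# `DummyEnv` and `policy` are illustrative stand-ins, not Gato's real components.

class DummyEnv:
    """Toy environment: the state is a counter the agent can increment."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action            # apply the action
        return self.state               # return the new observation

def policy(context):
    """Stand-in for the agent: map the token context to the next action."""
    return 1 if context[-1] < 5 else 0  # keep incrementing until state reaches 5

def rollout(env, steps=8):
    context = [env.state]               # running sequence of observation tokens
    for _ in range(steps):
        action = policy(context)        # choose next action from the context
        obs = env.step(action)          # apply it to the environment
        context += [action, obs]        # append action and new observation
    return context

trace = rollout(DummyEnv())
```

Each iteration appends one action token and one observation token, so the context the "policy" conditions on keeps growing, exactly as in the figure's repeated sampling loop.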

The results are mixed. When it comes to dialogue, Gato falls far short of competing with GPT-3, OpenAI's text-generation model. It can give wrong answers in conversation; it replied, for example, that Marseille is the capital of France. The authors point out that this could probably be improved with further scaling.

Nevertheless, it proved extremely capable in other fields. Its designers report that on 450 of the 604 tasks listed in the research paper, Gato performs better than a human expert at least half the time.

Examples of tasks performed by Gato, expressed as token sequences. © S. Reed et al., 2022.

"Game over", really?

Some AI researchers see AGI as an existential catastrophe for humans: a "superintelligent" system that transcends human intelligence would, in the worst-case scenario, replace humanity on Earth. Other experts believe the emergence of such AGIs will not happen in our lifetime. This is the skeptical view that Tristan Green argued in his editorial on the site The Next Web. He explains that it is easy to mistake Gato for a real AGI. The difference, however, is that a general intelligence can learn to do new things without prior training.

The response to this article was not long in coming. On Twitter, Nando de Freitas, a researcher at DeepMind and professor of machine learning at Oxford University, declared that "the game is over" in the long quest for artificial general intelligence. He added: "It's about making these models bigger, safer, more compute-efficient, faster at sampling, with smarter memory, more modalities, innovative data, online/offline. We will get AGI by solving these challenges."

Nevertheless, the authors warn against the development of these AGIs: "Although generalist agents are still an emerging field of research, their potential impact on society calls for a thorough interdisciplinary analysis of their risks and benefits. […] Harm-mitigation tools for generalist agents are relatively underdeveloped and require further research before these agents are deployed."

Moreover, generalist agents capable of performing tasks in the physical world raise new challenges that require new mitigation strategies. For example, physical embodiment may lead users to anthropomorphize the agent, resulting in misplaced trust in a malfunctioning system.

Beyond the risk of an AGI tipping into behavior harmful to humanity, no data currently demonstrates an ability to produce consistently robust results. This is all the more true because human problems are often hard, do not always admit a single solution, and sometimes allow no prior training at all.

Despite Nando de Freitas's reaction, Tristan Green stands by his opinion just as firmly on The Next Web: "It is easy to mistake this kind of machine sleight of hand for magic à la Copperfield, especially when you realize the machine is no smarter than a toaster (and clearly dumber than a mouse)."

Whether or not we agree with these statements, and however optimistic we may be about the development of AGI, it seems that scaling up such an intelligence to compete with our human minds is still a long way off.

Source: arXiv
