AI: Why Huawei doesn’t believe in GPT-3

Where will the artificial intelligence (AI) of tomorrow take us? This is a big question that divides scientists.

On the occasion of WAICF, the World Artificial Intelligence Cannes Festival, held in Cannes from 14 to 16 April, ZDNet spoke to Balázs Kégl, data scientist and head of AI research at Huawei.

He is interested in creating an AI that thinks like a human, rather than one that merely applies rules in a semi-intelligent way.

“Understanding and modeling systems”

Before joining the ranks of the Chinese telecom giant, Balázs Kégl set up a data science center at Paris-Saclay in 2014. The purpose of this experimental and interdisciplinary research center was to develop mechanisms to accelerate AI adoption in scientific fields such as chemistry and neuroscience.

A few years later, keen to get closer to industry, Balázs Kégl took over the management of Huawei's AI research center in Paris in 2019. It is part of a global network of artificial intelligence laboratories, called Noah's Ark Lab, involved in a number of cross-cutting themes.

The center works to "understand and model systems" while pursuing, in parallel, a "long-term vision" of AI, asking its researchers to identify "technologies that are reusable from one system to another." Among the research projects keeping his team busy, Balázs Kégl specifically mentioned a data center cooling system. "We take on projects to inspire our technical building blocks," the researcher explained, and to "anchor those technical bricks in real BU [Business Unit] projects," the AI expert added.

Value comes first

Based on this research, he argues that in any AI project, it is better to "start with value to get measurable motivation." In other words, "the more concrete, the better," Balázs Kégl assures. For him, this approach is what leads to subsequent successes.

So he believes that GPT-3, the OpenAI model trained on gigabytes of text that can automatically generate complete paragraphs, "is not the direction artificial intelligence should take." The researcher notes that "the text generation is amazing and highly sophisticated, but it's just that: sophisticated. It's as if we had already created the language faculty of our future intelligence, but everything else is missing: no body, no feeling, no action."

His assessment is that "what we see on the surface is that AI causes a lot of problems when interacting with the real world." For Balázs Kégl, "we need to get back to the fundamentals of AI in order to advance the state of the art." To do this, the Huawei research head believes the first step is to build systems that "act on and interact with the physical world."

Paradigm shift

His words resonated with those of another AI thinker, Yann LeCun, who also traveled to Cannes. In a keynote address before a gathering of hundreds of professionals, Meta's chief AI scientist presented his vision for the future of AI. According to Yann LeCun, tomorrow's autonomous AI will not be a scaled-up version of today's AI, but will have to be built on a "new concept."

"We must invent a new kind of learning that will allow machines to learn like humans or animals. This requires some form of common sense. Today, no AI has any level of common sense; it is not grounded in reality. We must allow it to experience the world and understand how it works," said Yann LeCun.

Like his colleague, Balázs Kégl firmly believes in the idea of a "paradigm shift" to create truly intelligent machines. To illustrate his point, he said he was "more excited" about the autonomous car than about GPT-3. "This is where value finds its place," he argues, counting among AI's milestones the impressive performances of AlphaGo, the artificial intelligence created by DeepMind, which managed to beat the game's world champions and which is based on reinforcement learning.

During his keynote address, Yann LeCun noted that one of AI's biggest challenges is "learning to represent the world." And he added: "We must design an architecture capable of handling the fact that there are many things in the world that may be unpredictable or irrelevant." Acknowledging that "this idea is going to be a hard sell," he argued that the machine learning community must agree to abandon one of its pillars, probabilistic modeling.
