AI is learning how to build itself: since it has proved so difficult for humans to build intelligent machines, perhaps we should let the machines take on the job themselves


Rui Wang, an artificial intelligence researcher at Uber, likes to leave the Paired Open-Ended Trailblazer (POET) software running overnight on his laptop. POET is a training tool for virtual robots. So far, they have not learned much. These AI agents are not learning to play Go, spot signs of cancer, or fold proteins; they are simply trying to navigate a crude cartoon landscape of fences and canyons without falling over.

But it is not what the robots learn that is exciting; it is how they learn it. POET creates the obstacle courses, evaluates the robots’ abilities, and assigns them their next challenge, all without human intervention. Step by step, the robots improve by trial and error.
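
To make that loop concrete, here is a deliberately simplified sketch of a POET-style outer loop. It is not Uber’s actual implementation: the environments, agents, scoring function, and thresholds below are invented stand-ins, but the structure (mutate environments, keep only those that are neither trivial nor impossible, optimize each paired agent, and attempt transfers) follows the published description.

```python
import random

# Toy sketch of a POET-style outer loop (illustration only, not Uber's code).
# An "environment" is just a difficulty number and an "agent" is a skill
# number, stand-ins for the obstacle courses and walking robots in the real
# system. An agent scores well when its skill matches its course.

def evaluate(agent, env):
    """Score how well an agent copes with an environment (0 is perfect)."""
    return -abs(agent - env)

def optimize(agent, env, steps=20, sigma=0.2):
    """Improve an agent in its paired environment by simple hill-climbing."""
    for _ in range(steps):
        candidate = agent + random.gauss(0, sigma)
        if evaluate(candidate, env) > evaluate(agent, env):
            agent = candidate
    return agent

def mutate(env):
    """Generate a new, somewhat harder variant of an existing obstacle course."""
    return env + abs(random.gauss(0, 1.0))

def worth_keeping(agent, env):
    """POET's minimal criterion: a new course must be neither trivial nor impossible."""
    return -2.0 < evaluate(agent, env) < -0.1

population = [(0.0, 0.0)]            # list of (environment, agent) pairs
for generation in range(60):
    # 1. Every so often, create new courses by mutating existing ones.
    if generation % 10 == 0 and len(population) < 20:
        children = [(mutate(env), agent) for env, agent in population]
        population += [(e, a) for e, a in children if worth_keeping(a, e)]

    # 2. Let every agent practise on its own paired course.
    population = [(env, optimize(agent, env)) for env, agent in population]

    # 3. Attempt transfers: an agent trained elsewhere may suit this course better.
    agents = [agent for _, agent in population]
    population = [(env, max(agents, key=lambda a: evaluate(a, env)))
                  for env, _ in population]

print(f"evolved {len(population)} course-agent pairs without human intervention")
```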

At times, the robots even learn to leap over obstacles like kung fu masters, Wang explains. “Every day I walk into my office, open my computer, and I don’t know what to expect,” he says. This may seem rudimentary at first, but for Wang and a handful of other researchers, POET hints at a revolutionary new way to build ultra-intelligent machines: by getting AI to build itself.

Wang’s former colleague Jeff Clune is one of the strongest proponents of this idea. Clune has been working on it for several years, first at the University of Wyoming and later at Uber AI Labs, where he collaborated with Wang and others. He now splits his time between the University of British Columbia and OpenAI, and enjoys the backing of one of the world’s leading artificial intelligence laboratories.

Clune believes that the effort to create truly intelligent AI is the most ambitious scientific quest in human history. Today, seven decades after serious AI efforts began, we are still a long way from creating machines as smart as humans, let alone smarter. Clune thinks POET could point to a shortcut. “We have to take off the shackles and get out of our own way,” he says.

If Clune is right, using AI to create AI could be an important step on the path to artificial general intelligence (AGI): machines capable of outperforming humans. In the shorter term, the strategy could also help us discover other kinds of intelligence: non-human intelligences that can find solutions in unexpected ways and that might complement our own intelligence rather than replace it.

Clune’s ambitious vision does not rest on OpenAI’s investment alone. The history of AI is full of examples in which human-designed solutions have given way to machine-learned ones. Take computer vision: ten years ago, the big breakthrough in image recognition came when existing hand-crafted systems were replaced by self-learning ones. The same is true of many of AI’s successes.

A fascinating aspect of AI, and of machine learning in particular, is its ability to find solutions that humans have not. A frequently cited example is AlphaGo (and its successor AlphaGo Zero), which beat the best of humankind at the ancient and captivating game of Go using seemingly alien tactics.

In 2016, at the end of a game that lasted three hours and that commentators considered close, the professional player Lee Sedol, regarded as one of the best players of the 2000s, resigned in the face of the program’s attack. After hundreds of years of study by human masters, AI had found solutions that no one had imagined.

Clune is currently working with an OpenAI team that, in 2018, built bots that learned to play hide-and-seek in a virtual environment. These AIs started out with simple goals and simple tools to achieve them: one pair had to find the other, which could hide behind movable barriers. Yet once these bots were set loose to learn, they quickly found ways to exploit their environment that the researchers had not anticipated.

They exploited loopholes in their virtual world’s simulated physics to jump over walls and even pass through them. This kind of unexpected emergent behavior suggests that AI could find technical solutions that humans have not thought of by themselves, inventing new and more efficient types of algorithms or neural networks, or even abandoning neural networks, the cornerstone of modern AI, altogether.

First you have to build a brain, then you have to teach it. But machine brains do not learn the way ours do. Our brains are excellent at adapting to new environments and new tasks; today’s AIs can solve problems under specific conditions but fail when those conditions change, even slightly. This brittleness hinders the search for a more generalized AI, one that would be effective across a wide range of situations and would be a big step toward true intelligence.

According to Jane Wang, a researcher at DeepMind in London, the best way to make AI more flexible is to have it learn that flexibility itself. In other words, she wants to build an AI that not only learns specific tasks but also learns to adapt to new situations.

For years, researchers have tried to make AI more adaptable by hand. Wang thinks that letting AI work the problem out on its own avoids the guesswork of a hand-crafted approach: “We can’t expect to find the right answer straight away.” She also hopes that, by doing this, we will learn more about how brains work; there is still a great deal we do not understand about how humans and animals learn, she says. There are two main methods for automatically generating learning algorithms, but both start with an existing neural network and use AI to teach it how to learn.

The first method uses a recurrent neural network and was invented independently by Wang and her colleagues at DeepMind and, around the same time, by a team at OpenAI. This type of network can be trained so that the pattern in which its neurons activate, much like the firing of neurons in a biological brain, encodes any kind of algorithm. DeepMind and OpenAI took advantage of this to train a recurrent neural network to generate reinforcement learning algorithms, which tell an AI how to behave in order to achieve given goals.

The result is that the DeepMind and OpenAI systems do not learn an algorithm that solves a specific challenge, such as image recognition; they learn a learning algorithm that can be applied to multiple tasks and adapt as it goes. It is like the old adage about teaching a man to fish: where a hand-crafted algorithm learns to do one particular job, these AIs are built to learn how to learn for themselves. And some of them perform better than human-designed ones.
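
To give a flavor of how this works, here is a compact sketch of the underlying idea, often described as meta-reinforcement learning: a recurrent network is fed its own previous action and reward and is trained across many small tasks, so that its hidden state comes to implement a little reinforcement learning algorithm of its own. The toy two-armed bandit task, the network size, and the training loop below are invented for illustration and are not DeepMind’s or OpenAI’s actual code.

```python
import torch
import torch.nn as nn

# Minimal sketch of a recurrent network that "learns to learn" (illustration
# only). The LSTM sees its previous action and reward, so across many training
# tasks its hidden state starts behaving like a reinforcement learning rule.

class RecurrentPolicy(nn.Module):
    def __init__(self, n_actions=2, hidden=32):
        super().__init__()
        # Input = one-hot previous action + previous reward.
        self.lstm = nn.LSTMCell(n_actions + 1, hidden)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x, state):
        h, c = self.lstm(x, state)
        return torch.distributions.Categorical(logits=self.head(h)), (h, c)

policy = RecurrentPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(2000):
    # Sample a fresh task: a two-armed bandit whose arms pay off with unknown odds.
    arm_probs = torch.rand(2)
    state = (torch.zeros(1, 32), torch.zeros(1, 32))
    x = torch.zeros(1, 3)
    log_probs, rewards = [], []
    for t in range(20):                          # one short "lifetime" on this task
        dist, state = policy(x, state)
        action = dist.sample()
        a = action.item()
        reward = torch.bernoulli(arm_probs[a])
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        x = torch.zeros(1, 3)                    # tell the network what just happened
        x[0, a] = 1.0
        x[0, 2] = reward
    # REINFORCE: make whole lifetimes with high total reward more likely.
    total = torch.stack(rewards).sum()
    loss = -torch.stack(log_probs).sum() * (total - 10.0)   # 10 is a crude baseline
    opt.zero_grad()
    loss.backward()
    opt.step()
```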

The second method comes from Chelsea Finn at the University of California, Berkeley, and her colleagues. Called model-agnostic meta-learning, or MAML, it trains a model using two machine learning processes, one nested inside the other.

Here’s how it works:

MAML’s inner process is trained on data and then tested, as usual. But then the outer process takes the performance of the inner model, how well it identifies images, for example, and uses it to adjust that model’s learning algorithm so as to improve performance. It is a bit like a school inspector overseeing a group of teachers, each offering different learning strategies: the inspector checks which strategies get the students the best results and adjusts them accordingly.
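
The nested structure is easier to see in code. Below is a minimal sketch of MAML on a toy sine-wave regression problem, assuming PyTorch; the task family, network size, and learning rates are invented for illustration rather than taken from Finn’s paper. The inner loop adapts a copy of the weights to one task, and the outer loop differentiates through that adaptation to learn starting weights that adapt quickly.

```python
import math
import torch

def net(params, x):
    """A tiny two-layer network applied functionally, so adapted weights can be swapped in."""
    w1, b1, w2, b2 = params
    h = torch.tanh(x @ w1 + b1)
    return h @ w2 + b2

def sample_task():
    """Each task is a sine wave with its own amplitude and phase."""
    amplitude = torch.rand(1) * 4.0 + 1.0
    phase = torch.rand(1) * math.pi
    return lambda x: amplitude * torch.sin(x + phase)

params = [(torch.randn(1, 40) * 0.1).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.1).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):                                   # a small batch of tasks
        task = sample_task()
        x_train = torch.rand(10, 1) * 10 - 5             # a few points to adapt on
        x_test = torch.rand(10, 1) * 10 - 5              # held-out points to judge the adaptation

        # Inner loop: one gradient step specialised to this task, keeping the
        # computation graph so the outer loop can differentiate through it.
        inner_loss = ((net(params, x_train) - task(x_train)) ** 2).mean()
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]

        # Outer loop: judge the adapted weights on held-out data and push the
        # gradient of that judgement back into the shared starting weights.
        outer_loss = ((net(adapted, x_test) - task(x_test)) ** 2).mean()
        outer_loss.backward()
    meta_opt.step()
```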

Using these methods, researchers are building AI that is more robust, more generalized, and able to learn faster with less data. Finn, for example, wants a robot that has learned to walk on flat ground to be able, with minimal additional training, to walk on slopes, on grass, or while carrying loads.

Last year, Clune and his colleagues extended Finn’s technique to design an algorithm that learns using fewer neurons, so that it does not erase what it has learned before, a major unsolved problem in machine learning known as catastrophic forgetting. A trained model that uses fewer neurons, called a “sparse” model, keeps more neurons unused during training and available for new tasks, which means that fewer “used” neurons get overwritten.
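
A toy experiment can illustrate why sparsity helps, though it should be stressed that this is only the underlying intuition, not Clune’s actual meta-learned method: if each task activates only its own small, disjoint set of units, learning a second task barely touches the weights the first task relies on. The tasks, masks, and network below are invented for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(100, 1, bias=False)            # one shared set of weights for both tasks
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

# Each task is only allowed to activate its own small, disjoint set of input units.
mask_a = torch.zeros(100); mask_a[:10] = 1.0     # task A "uses" units 0-9
mask_b = torch.zeros(100); mask_b[10:20] = 1.0   # task B "uses" units 10-19

def make_batch(mask, scale, n=64):
    x = torch.randn(n, 100) * mask               # inactive units stay silent
    y = scale * x.sum(dim=1, keepdim=True)       # a simple target built from the active units
    return x, y

def train(mask, scale, steps=200):
    for _ in range(steps):
        x, y = make_batch(mask, scale)
        loss = ((layer(x) - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

def test(mask, scale):
    x, y = make_batch(mask, scale, n=256)
    return ((layer(x) - y) ** 2).mean().item()

train(mask_a, +1.0)                              # learn task A
before = test(mask_a, +1.0)
train(mask_b, -1.0)                              # learn a conflicting task B
after = test(mask_a, +1.0)
print(f"task A error before/after learning task B: {before:.4f} / {after:.4f}")
# Because the active units are disjoint, the two numbers stay close; with dense,
# overlapping activations, learning task B would overwrite the weights task A needs.
```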

Clune found that challenging his AI to learn multiple tasks led it to devise its own version of a sparse model, and that it performed better than human-designed ones. If AI is going to build and teach itself, it should also create its own learning environments: the schools and textbooks and lesson plans.

Over the past year we have seen a number of projects in which AI is trained on automatically generated data. Face recognition systems, for example, are trained with AI-generated faces. AIs are also learning from training each other. In one recent example, two robotic arms worked together, with one arm learning to set increasingly difficult block-stacking challenges, which let the other arm practise grasping objects.

If AI starts building intelligence on its own, there is no guarantee that the result will be human-like. Rather than humans teaching machines to think like us, machines may end up teaching humans new ways of thinking.

And you?

What is your opinion about it?

See also:

Blockchain, cybersecurity, cloud, machine learning, and DevOps are among the most sought-after technologies of 2022, according to one report.

Footage shows an AI-powered robot blowing up a tank car, reviving fears of a proliferation of lethal autonomous weapons.

The transcript used as evidence that Google’s LaMDA AI is sentient was edited to make it easier to read, according to a note from the engineer fired by Google.

Google engineer fired after claiming Google’s LaMDA AI chatbot had become sentient and expressed thoughts and feelings equivalent to those of a human child
