What is the difference between strong AI and weak AI?

Artificial intelligence (AI) is a very common term today, but what AI actually is and what most people imagine it to be can differ greatly. The AI you know is “weak” AI; the AI many people fear is “strong” AI.

What is artificial intelligence, really?

It’s easy to throw around a word like “AI”, but doing so doesn’t make clear what we’re actually talking about. In general, “artificial intelligence” refers to an entire field of computing. The goal of AI is to enable computers to replicate what natural intelligence can accomplish. That includes human intelligence, the intelligence of other animals, and even forms of intelligence found outside the animal kingdom, such as in plants and unicellular organisms.

This raises a profound question: what is “intelligence” in the first place? Even the scientists who study intelligence have yet to agree on a universal definition of what it is or is not.

Broadly speaking, intelligence is the ability to learn from experience, make decisions, and achieve goals. Intelligence makes it possible to adapt to new situations, which distinguishes it from preprogrammed behavior or instinct. The more complex the problem, the more important intelligence becomes.

We still have a lot to learn about human intelligence, even though we have many ways of measuring it; we do not actually know how it works. Some theories, such as Gardner’s theory of multiple intelligences, have been largely discredited, while there is ample evidence supporting the existence of a general intelligence factor in humans, known as the “g factor”.

In other words, our description of intelligence, both natural and artificial, continues to evolve. Although we seem to recognize intelligence intuitively, it is difficult to draw a firm boundary around the concept.

The era of weak AI has come

The AI we currently have is usually called “weak” or “narrow” AI. This means that a given AI system is very good at performing a single task or a narrow set of tasks. Deep Blue, the first computer to defeat a human world champion at chess, was completely useless at everything else. Fast-forward to AlphaGo, the first computer to beat a top human player at Go: it is vastly more sophisticated, but still only good at one thing.

All the AIs you have encountered, used, or seen today are weak. Sometimes more complex systems are built by combining different narrow AI systems (see the sketch below), but the result is still effectively narrow AI. Although these systems, especially those based on machine learning, can produce unexpected results, they are nothing like human intelligence.
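To make that concrete, here is a minimal, hypothetical Python sketch of such a combination: a crude keyword-based language detector feeding a per-language sentiment scorer. Every name and word list here is invented for illustration; real systems would use trained models, but the structural point is the same. Each stage does exactly one job, and chaining them does not make the whole any less narrow.

```python
# Hypothetical sketch: chaining two narrow "AI" components.
# All names and word lists are invented for illustration only.

GERMAN_HINTS = {"der", "die", "das", "und", "ist"}

SENTIMENT_WORDS = {
    "en": {"good": 1, "great": 1, "bad": -1, "awful": -1},
    "de": {"gut": 1, "toll": 1, "schlecht": -1},
}

def detect_language(text: str) -> str:
    """Narrow component 1: crude keyword-based language guessing."""
    words = set(text.lower().split())
    return "de" if words & GERMAN_HINTS else "en"

def score_sentiment(text: str, language: str) -> int:
    """Narrow component 2: word-list sentiment scoring for one language."""
    lexicon = SENTIMENT_WORDS.get(language, {})
    return sum(lexicon.get(word, 0) for word in text.lower().split())

def analyze(text: str) -> int:
    """The combined pipeline: each stage is narrow, and so is the whole."""
    return score_sentiment(text, detect_language(text))

if __name__ == "__main__":
    print(analyze("This movie was great"))           # prints 1
    print(analyze("Der Film war schlecht und gut"))  # prints 0
```

Swapping in real trained models would not change the structure: the pipeline can only ever do what its fixed stages were built to do.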

Strong AI does not exist

An AI with intelligence equal to or greater than a human’s does not exist outside of fiction. Think of the AIs of movies and television, such as HAL 9000, the T-800, the AIs of Star Trek, or Robbie the Robot: they are seemingly sentient intelligences. They can learn to do anything, operate in any situation, and do whatever a human can do, often better. This is “strong” AI, or AGI (artificial general intelligence): an artificial entity that is at least our equal and would probably surpass us.

As far as we know, there are no real examples of this strong AI, unless one is hiding in a secret lab somewhere. The truth is, we don’t know how to build one. We have no idea what gives rise to human consciousness, which would presumably be an essential emergent feature of an AGI. This is known as the hard problem of consciousness.

Is strong AI possible?

No one knows how to create a strong AI, and no one knows whether it can be created at all. However, we do have evidence that strong general intelligence exists in nature: our own. Assuming that human consciousness and intelligence are the result of material processes subject to the laws of physics, there is no in-principle reason why an AGI could not be created.

The real question is whether we are smart enough to figure out how to do it. Humans may never make enough progress to bring AGI into being, and it is impossible to put a timeline on this technology in the way we can predict, say, that 16K displays will be available within a few years.

Then again, advances in our narrow AI technology and in other branches of science, such as genetic engineering, quantum computing, DNA computing, and advanced physics, may help us bridge the gap. This is all pure speculation until a breakthrough happens or someone produces a roadmap.

So the question is whether we should try to create AGI at all. Some very smart people, such as the late Professor Stephen Hawking and Elon Musk, believe that AGI could have catastrophic consequences.

Considering how remote AGI currently seems, these concerns may be somewhat exaggerated, but it probably doesn’t hurt to keep an eye on your Roomba, just in case.
