Yann LeCun (Meta): “Artificial intelligence will not be able to function without emotion.”

Yann LeCun, 62, has been the director of artificial intelligence research at Meta (Facebook, Instagram, WhatsApp, Oculus, etc.) since 2013. The Frenchman was awarded the Turing Award in 2019, often described as the equivalent of the Nobel Prize for computing, which he shared with his colleagues Yoshua Bengio and Geoffrey Hinton for their work on deep learning.

What is your vision for artificial intelligence over the next ten years?

You may have seen Her, Spike Jonze’s film released in 2013. It tells the story of the protagonist’s relationship with his virtual assistant, Samantha, with whom he falls in love. That is where we are headed. At the moment, we do not have the technology to build the Samantha of the film; we do not have the science to make such an intelligent machine. But eventually we want to introduce virtual agents that live in our augmented-reality glasses. In everyday life, they will be able to assist us: they might remind you to look left and right before crossing the road if you forget to, and warn you to step back onto the sidewalk if a car is approaching. The virtual agent will tell you where you left your keys, and when you travel to a foreign country, the translation of your conversation will appear in real time in your glasses.

Do we really want this society?

Yes, because we are overwhelmed by an amount of information that is growing rapidly, and we don’t know how to cope with it. For example, I can’t read all my emails. We will need digital assistants that pick out what is relevant, important, fun, educational, and so on. That will change the way we interact with the digital world, and with each other; it will be a great source of progress. But we are not there yet. It will take years.

A Google researcher claims to have detected consciousness in an artificial intelligence. Is that possible?

We are far from an artificial intelligence that has reached that level. We are missing something essential to replicate the kind of intelligence we observe in animals and humans. Today, an alley cat has far more common sense than the most powerful AI systems. Of course, these computer systems are impressive, especially those that communicate through text or dialogue. They have the appearance of intelligence, but that is not enough. They have some reasoning ability, but it is very limited. The limit of this intelligence is that they have no experience of the real world, the physical world.

Can artificial intelligence feel emotions?

Not today, but tomorrow, yes. In the first months of life, a human learns to predict the physical consequences of their actions. That is what lets you plan: you can anticipate the effect of what you do. The essence of intelligence is the ability to make predictions. So we are going to build machines that will eventually be able to plan actions. For that, these machines need to have a goal, a purpose: to get somewhere, for example. For us, it’s simple: open a door, walk, take the subway; we don’t have to think about it. But at the moment, machines cannot break a task down into sub-goals. I have myself worked on an architecture that would allow this. Then, to achieve a goal, the machine must be able to predict whether an outcome will be good or bad, and that goes through emotion. This is why artificial intelligence will not be able to function without emotion. If one day we have autonomous intelligent systems, they will have emotions. This is a little controversial; not everyone agrees with me. But in my opinion, it is inevitable.

What are the latest breakthroughs in artificial intelligence? What impact do they have on how Meta works today?

Alongside faster translation, there has been a gigantic leap in language understanding, due to a combination of techniques. Research in this area is very open, and new ideas spread very fast. There has been a real revolution in the way machines are trained. We take a large text of between 500 and 1,000 words and replace 10% to 15% of the words with blank markers. It is a bit like a fill-in-the-blanks exercise for children. In this way, the system trains itself to understand the structure of the text. For example, given “Cats are chasing … in the kitchen”, the system must find the word “mouse”.
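The masking step described above can be sketched in a few lines. This is a minimal illustration only, assuming a simple whitespace tokenizer and a made-up `[MASK]` marker, not Meta’s actual training pipeline:

```python
import random

def mask_tokens(text, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Replace a random fraction of words with a blank marker, as in
    masked-language-model pretraining. Returns the masked text and a
    dict of the hidden words the model would have to recover."""
    rng = random.Random(seed)  # fixed seed keeps the demo reproducible
    words = text.split()
    targets = {}
    for i in range(len(words)):
        if rng.random() < mask_rate:
            targets[i] = words[i]   # remember what was hidden
            words[i] = mask_token   # blank it out in the input
    return " ".join(words), targets

masked, hidden = mask_tokens("Cats are chasing mice in the kitchen",
                             mask_rate=0.3)
print(masked)  # some words replaced by [MASK]
print(hidden)  # the words the system must predict back
```

During real pretraining, the model sees only the masked text and is scored on how well it predicts the hidden words, which is what forces it to learn the statistics of language.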


Has this affected content moderation across Meta’s platforms?

Over the past two years, these tools have been used at Meta to translate and moderate content. Five years ago, the proportion of hate speech detected automatically by artificial intelligence systems was about 30%; the rest was flagged by users, then reviewed by human moderators who decided whether or not to keep it. Today, 96% of hate speech is deleted automatically by artificial intelligence, before anyone reports it. This is called “pre-deletion”. The rest, i.e. less than 4%, is content reported by users. Of course, there are still hateful posts, but their proportion has dropped drastically.

Can this system be applied to the 7,000 languages and dialects spoken worldwide?

It works better in some languages than in others. But these new methods have enabled another revolution: the same network can now detect hate speech in hundreds of languages. There is no need to program the machine for a specific language.

In Burma, Facebook has been used to express hatred, towards the Rohingya in particular. Why did Meta let it happen?

In Burma, moderation has long been complicated because you had to recruit Burmese-speaking people, and the Burmese government does not look favorably on Meta, which makes things harder. For a while, we relied on translation software that first translated from Burmese into English; it was then up to English-speaking moderators to identify the hateful comments. Now, all of this is done without going through English.

Has the progress of these tools made it possible to solve the problem raised by the whistleblower Frances Haugen? This former employee criticized Facebook for its acute lack of moderators who properly speak the language of each country …

In 2014-2015, there was relatively little content moderation. Meta has made tremendous progress in automatic detection over the past two years, especially when it comes to pedophilia. It is far from perfect, but there is a real will within the company to deal with the problem.

Today, 40,000 people worldwide work on safety and security issues for Meta’s platforms. Does this mean that one day we will no longer need human moderators?

We will always need human moderators to deal with the subtleties. Some violent speech may slip through the AI’s net. Conversely, perfectly reasonable comments may be deleted because they use a warlike metaphor that the software cannot recognize as non-literal.

As he plans to buy Twitter, Elon Musk denounces the number of bots hiding behind that network’s accounts. Does Meta have a problem with fake accounts?

In the first half of 2022, Meta deleted 1.6 billion fake accounts. In general, we have fewer bots than Twitter, because Meta is primarily a network of friends whose motivation is sharing, so users have an interest in appearing under their real identity, or at least in being recognizable. People are also much more measured in what they say.
