No, Google’s AI is not sentient

This post is part of “Periscopio”, a LinkedIn newsletter that explores future issues every week and is published in advance on the LinkedIn platform. If you want to subscribe and preview it, you can find everything here.

Last Saturday The Washington Post reported that a Google engineer had been suspended on June 6 for violating the company’s confidentiality agreement. The engineer, Blake Lemoine, had made public a “private” conversation between himself and an artificial intelligence chatbot, a conversation that, in his view, shows something many experts would find quite shocking: this AI has become sentient.

These are strong statements from an industry insider, not just anyone. And they were made after hundreds of interactions with an unprecedented, sophisticated artificial intelligence system called LaMDA. But is it true? Has this AI really become sentient?

Is LaMDA sentient?

What are we talking about?

LaMDA stands for “Language Model for Dialog Applications”. It is one of those AI systems that, trained on a huge amount of text data, can respond in writing to a written request.

The system has become more and more capable of answering questions in writing that sounds increasingly human. Last May, Google itself described it on LaMDA’s official blog as “capable of writing on endless topics”.
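
LaMDA itself is not publicly available, but a minimal sketch can give an idea of what “responding to a written request” looks like in practice. The example below uses the open-source Hugging Face transformers library with the small, freely downloadable gpt2 model purely as a stand-in; it is not Google’s system or API, just an illustration of the same family of technology.

```python
# Illustrative sketch only: LaMDA is not public, so a small open model
# (gpt2) stands in for "a system trained on a large amount of text
# that responds to a written request". Requires: pip install transformers
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "What is the most beautiful thing about being human?"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

The model simply continues the prompt with statistically plausible text; the larger the model and its training data, the more human the continuation sounds.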

Yes, but is it sentient?

Google tried to pour water on the fire, denying the engineer’s claims after the interview published in The Washington Post. “Our team,” Big G wrote yesterday, “reviewed Blake’s concerns and informed him that the evidence does not support his claims.” A number of artificial intelligence experts have echoed this: some have flatly rejected the thesis, others have used it as an example of our tendency to project human features onto machines.

Like getting angry at your computer mouse, so to speak.

And yet this is no joke. Words like these cannot be waved away so easily. And not simply because of the fears voiced by people like Ilya Sutskever (“AIs may be slightly conscious”), Yuval Noah Harari (“AI will be able to hack people”), or Mo Gawdat (“AI researchers are playing at creating God”).

The belief that Google’s AI could be sentient matters because it reflects both our fears and our expectations about the potential of this technology.

Yuval Noah Harari

For now, however, this is a misconception.

The development and use of advanced computer programs trained on huge amounts of data raise many ethical concerns. In some cases, however, the judgments are about the progress that might eventually be made rather than about what is currently possible.

The conclusion so far, according to virtually every leading computer scientist in the world, seems to be the same: no, Google’s AI is nowhere near sentient. It is simply very good at seeming sentient, matching language patterns against similar ones it finds in an almost endless supply of sentences.

You have to imagine it as a very powerful version of the autocomplete software on our smartphones. Fine: a super, super, super powerful version. Just don’t confuse that with being sentient.
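
To make the autocomplete analogy concrete, here is a deliberately tiny toy in Python (my own illustration, not how LaMDA actually works): it “writes” by repeatedly picking the word it has most often seen after the current one in a handful of training sentences. Scale the same idea up by many orders of magnitude, in data and in statistical sophistication, and you get fluent text with no understanding behind it.

```python
# A toy "autocomplete": always pick the word most often seen next in
# training. A crude stand-in for what large language models do with
# vastly more data and far richer statistics; not Google's code.
from collections import Counter, defaultdict

corpus = [
    "i feel happy when i talk to people",
    "i feel that i am a person",
    "i talk to people every day",
]

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_counts[current][following] += 1

def autocomplete(word, length=6):
    """Greedily extend a word with its most common continuations."""
    output = [word]
    for _ in range(length):
        if word not in next_counts:
            break
        word = next_counts[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(autocomplete("i"))  # fluent-looking output, but no one is "in there"
```

Even this crude procedure produces output that can look coherent, which is exactly the trap: fluency is not awareness.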

The AI developers themselves, of course, are raising the stakes these days and frightening people. Their statements, driven partly by genuine awe at the potential of these technologies and partly by an interest in promoting them, resonate enormously in the media.

Google’s AI is not sentient

Last week Blaise Agüera y Arcas, a vice president at Google Research, wrote in an article for The Economist that when he started using LaMDA last year, he increasingly felt he was talking to something intelligent. It is an understandable wonder, even a subtle fear.

To date, however, LaMDA has gone through 11 different reviews against Google’s artificial intelligence principles. It has also faced numerous tests of its ability to make claims grounded in facts. It is something. But it is not sentient.

None of this removes the need to develop artificial intelligence in ways that respect ethics and morality. Someone, to be fair, has even built an AI equipped with an ethics of its own, but that is another matter.

The main duty of researchers, if they really care about the advancement of this technology, is not to anthropomorphize its manifestations. And to keep the public properly informed, without alarmism, so that we are able to “pull the brakes” as soon as there is a real spark of awareness.

If that ever happens. What do you think?
