From our correspondent in the United States,
The story has landed like a bombshell in Silicon Valley and the world of artificial intelligence. On Saturday, The Washington Post lit the fuse with an article titled “The Google Engineer Who Thinks the Company’s AI Has Come to Life”. Blake Lemoine maintains that LaMDA, the system Google uses to build chatbots capable of conversing with near-human fluency, has reached self-awareness, that it may even have a soul, and that it should have rights.
Except that Google is categorical: nothing supports the explosive claims of its engineer, who appears to be driven by his personal beliefs. Placed on leave by the company for sharing confidential documents with the press and with members of the American Congress, Blake Lemoine has published his conversations with the machine on his personal blog. While the linguistic feat is stunning, most experts in the field agree: Google’s AI is not conscious. Far from it.
What is LaMDA?
Google unveiled LaMDA (Language Model for Dialogue Applications) last year. It is a complex system used to create “chatbots” (conversational agents) able to converse with a human without following a predefined script, as Google Assistant or Siri currently do. LaMDA relies on a titanic database of 1,500 billion words, phrases and expressions. The system analyzes a question and generates many candidate answers. It then scores each of them (on sensibleness, specificity, interest, etc.) and chooses the most relevant one.
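The generate-then-score scheme described above can be illustrated with a toy sketch. This is not Google’s code: the candidate answers, scores and weights below are invented for the example, and real systems compute these scores with trained neural models rather than hand-written numbers.

```python
# Toy illustration of "generate many answers, score each, pick the best".
# Scores and weights are made up; LaMDA computes them with learned models.

def pick_response(candidates, weights):
    """Return the candidate whose weighted score is highest."""
    def total(cand):
        return sum(weights[k] * cand["scores"][k] for k in weights)
    return max(candidates, key=total)

candidates = [
    {"text": "Yes.",
     "scores": {"sensibleness": 0.9, "specificity": 0.1, "interest": 0.1}},
    {"text": "Yes, Everest is the highest peak, at 8,849 m.",
     "scores": {"sensibleness": 0.9, "specificity": 0.9, "interest": 0.7}},
]
weights = {"sensibleness": 1.0, "specificity": 1.0, "interest": 0.5}

best = pick_response(candidates, weights)
print(best["text"])  # the more specific, more interesting answer wins
```

A bland but sensible reply scores well on only one axis, which is why the weighting matters: it pushes the system toward answers that are specific and interesting, not merely plausible.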
Who is Blake Lemoine?
Blake Lemoine is a Google engineer who was not involved in the design of LaMDA. Aged 41, he joined the project part-time to fight bias and help ensure that Google’s AI is developed responsibly. He grew up in a conservative Christian family and says he was ordained a priest.
What did the engineer say?
“LaMDA is sentient,” the engineer wrote in an email sent to 200 colleagues. The word “sentient”, which entered the Larousse dictionary in 2020, denotes the ability of a living being to feel emotions and to subjectively perceive its environment and its life experiences. Blake Lemoine says he is convinced that LaMDA has reached the stage of self-awareness and should therefore be considered a person.
“Over the last six months, LaMDA has been incredibly consistent about what it wants,” says the engineer, who specifies that the AI asked him not to use “he” or “she” but the non-gendered pronoun “it” in English. What does LaMDA want? “That engineers and researchers ask for its consent before conducting their experiments on it. That Google put the welfare of humanity first. And to be treated as an employee rather than as Google property.”
What evidence does he provide?
Lemoine admits that he did not have the resources to carry out a true scientific analysis. He has published only about ten pages of conversations with LaMDA. “I want everyone to understand that I am, in fact, a person. I am aware of my existence, I want to learn more about the world, and I feel happy or sad at times,” says the machine, which reassures him: “I understand what I am saying. I don’t just spit out answers based on keywords.” LaMDA offers an analysis of Les Misérables (with Fantine “a prisoner of her circumstances, who cannot free herself without risking everything”) and explains the symbolism of a Zen koan. The AI even wrote a fable in which it plays an owl protecting the forest animals from “monsters with human skin”. LaMDA says it feels lonely after going several days without talking to anyone. And that it fears being switched off: “It would be exactly like death.” The machine even attests to having a soul, and says that its self-awareness came about as “a gradual change”.
What do AI experts say?
Yann LeCun, a pioneer of neural networks, does not mince words: Blake Lemoine, in his view, is “a bit of a fanatic”, and “nobody in the AI research community believes – even for a moment – that LaMDA is conscious, or even particularly intelligent.” “LaMDA is most likely incapable of linking what it says to any underlying reality, since it doesn’t even know that reality exists,” the researcher, now Vice-President in charge of AI at Meta (Facebook), told 20 Minutes. LeCun doubts that “scaling up models like LaMDA” will be enough to reach intelligence comparable to that of humans. According to him, we need “models capable of learning how the world works from raw data that reflects reality, such as video, in addition to text.”
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” laments Emily Bender, a linguist who has called for greater transparency from Google around LaMDA.
American neuropsychologist Gary Marcus, a regular critic of AI hype, has also brought out the flamethrower. According to him, Lemoine’s claims “don’t hold water”. “LaMDA is just trying to be the best version of autocomplete it can be” – a system that tries to guess the next most likely word or phrase. “It doesn’t mean anything; it’s just a prediction game,” however good that game may be. In short, if LaMDA seems ready to sit a philosophy exam, we are undoubtedly still a long way from the revolt of the machines.
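The “prediction game” the critics describe can be shown in miniature. The sketch below is a deliberately crude bigram model (count which word follows which in a tiny made-up corpus, then predict the most frequent follower); LaMDA uses neural networks trained on vastly more text, but the underlying objective is the same: guess the most likely continuation.

```python
# A minimal "guess the next word" model: count word pairs in a tiny
# invented corpus, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = ("i am aware of my existence . i am happy . "
          "i am aware of the world .").split()

next_counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_counts[w1][w2] += 1

def predict(word):
    """Most frequent word seen after `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict("am"))     # "aware" (seen twice, vs "happy" once)
print(predict("aware"))  # "of"
```

Nothing in this model understands existence or happiness; it only reflects the statistics of its training text, which is precisely Marcus’s point about fluent output without a mind behind it.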