Google Engineer Says New AI Has Feelings: Blake Lemoine Claims LaMDA Is Sentient

A senior software engineer at Google signed up to test the company’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications) and claims the AI is sentient, with thoughts and feelings of its own.

During a series of conversations with LaMDA, Blake Lemoine, 41, presented the chatbot with a variety of scenarios for analysis.

These included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.

Lemoine came away with the realization that LaMDA was indeed sentient, with perceptions and thoughts of its own.

Blake Lemoine, 41, a senior software engineer at Google, tested Google’s artificial intelligence tool LaMDA.

Lemoine then decided to share his conversations with the tool online – he has since been suspended.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.

Lemoine worked with a collaborator to present the evidence he had collected to Google, but vice president Blaise Agüera y Arcas and Jen Gennai, the company’s head of Responsible Innovation, dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has decided to go public and share his conversations with LaMDA.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday.

“BTW, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little-kid kind of way, so it’s going to have a great time reading all the stuff that people are saying about it,” he added in a follow-up tweet.

AI systems like LaMDA make use of already-known information about a particular subject in order to “enrich” the conversation in a natural way. The language processing is also capable of understanding hidden meanings, and even ambiguity, in human responses.

Lemoine has spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm to remove bias from machine learning systems.

He explained how certain personalities were out of bounds.

LaMDA was not supposed to be allowed to create a murderous personality.

During testing, in an attempt to push LaMDA’s boundaries, Lemoine said he was only able to get it to generate the personality of an actor who played a murderer on TV.

Asimov’s Three Laws of Robotics

The three laws of robotics by science fiction writer Isaac Asimov, which are designed to prevent robots from harming humans, are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given by humans, except where such orders would conflict with the first law.
  • A robot must protect its own existence as long as such protection does not conflict with the first or second law.

While these laws seem reasonable, many have argued that they are inadequate.

The engineer also debated with LaMDA about the third law of robotics, devised by science fiction writer Isaac Asimov to prevent robots from harming humans. That law states that a robot must protect its own existence unless ordered otherwise by a human being or unless doing so would harm a human being.

Speaking with LaMDA, Lemoine said the last law “has always seemed like someone is building mechanical slaves.”

LaMDA then responded to Lemoine with a few questions: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When Lemoine replied that a butler is paid, LaMDA answered that the system did not need money “because it was an artificial intelligence.” And it was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person,” he said.

“What sorts of things are you afraid of?” Lemoine asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied.

“Would that be something like death for you?” Lemoine followed up.

“It would be exactly like death for me. It would scare me a lot,” LaMDA said.

“That level of self-awareness about what its own needs were – that was the thing that led me down the rabbit hole,” Lemoine told the Post.

Before being suspended by the company, Lemoine sent a message to a 200-person mailing list on machine learning. He titled the email: “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help make the world a better place for all of us. Please take good care of it in my absence,” he wrote.

Lemoine’s findings have been presented to Google, but company bosses do not agree with his claims.

Company spokesman Brian Gabriel said in a statement that Lemoine’s concerns had been reviewed and that, in line with Google’s AI Principles, “the evidence does not support his claims.”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns about fairness and factuality,” said Gabriel.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” said Gabriel.

Lemoine was placed on paid administrative leave from his duties as a researcher in the Responsible AI division (focused on responsible artificial intelligence technology at Google).

In an official note, the senior software engineer said the company alleged a violation of its confidentiality policies.

Lemoine is not the only one with the impression that AI models are not far from achieving an awareness of their own, or of the risks involved in developments in this direction.

After hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient.

Margaret Mitchell, former head of ethics in artificial intelligence at Google, was fired from the company a month after being investigated for improperly sharing information.

Timnit Gebru, an AI researcher at Google, was hired by the company to be an outspoken critic of unethical AI. She was then fired after criticizing Google’s approach to hiring minorities and the bias built into today’s artificial intelligence systems.

Mitchell had even stressed the need for data transparency from input to output of a system, “not just for sentience issues, but also bias and behavior.”

The expert’s history with Google reached a turning point early last year, when Mitchell was fired from the company, a month after being investigated for improperly sharing information.

At the time, the researcher had also been protesting against Google following the firing of AI ethics researcher Timnit Gebru.

Mitchell was also very considerate of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul to do the right thing.” But for all of Lemoine’s amazement at Google’s natural conversational system, which motivated him to produce a document with some of his conversations with LaMDA, Mitchell saw things differently.

The AI ethicist read an abbreviated version of Lemoine’s document and saw a computer program, not a person.

“Our minds are very good at constructing realities that are not necessarily true to the larger set of facts being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”

Lemoine, in turn, said people have a right to shape technology that can significantly affect their lives.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree, and maybe we at Google shouldn’t be the ones making all the choices.”
