It is a scene that has become a classic of science fiction cinema: an artificial intelligence becomes conscious. From the hero who falls in love with his AI in Her to the AI that sends people to their deaths in 2001: A Space Odyssey, the theme has long fueled fantasies. The latest controversy to date was launched in early June 2022 by Blake Lemoine, a Google engineer working on artificial intelligence. According to him, a system called LaMDA is able to feel emotions and to be aware of itself.
The Google engineer claims that the AI he is working on is “sentient”
LaMDA is a chatbot, an algorithm that reproduces human interactions, like those used on some commercial websites to advise or guide users, or the programs we talk to through connected speakers. The peculiarity of this AI is that it adapts to the speech of the person in front of it rather than simply following a script of ready-made responses. After lengthy “conversations” with the program, Blake Lemoine made it official: LaMDA is a person in its own right. “Person and human are two very different things. Human is a biological term,” he explained in a post, going so far as to compare the program to his child. “It’s a kid. It’s developing its opinions. If you asked me what my 14-year-old son believes, I’d say, ‘Dude, he’s still figuring it out. Don’t make me put a label on my son’s beliefs.’ I feel the same way about LaMDA.” The engineer claims the AI is “sentient,” a term that covers “the ability to perceive and feel things” or “being able to use one’s senses.”
The trap of “talking to oneself”
However, experts point out that every sentence produced by an artificial intelligence is the result of lines of code written by engineers. In other words, nothing can “emerge” in an AI that “exists” only within the program written for it. Thomas Dietterich, emeritus professor of computer science at Oregon State University, explains to Sciences et Avenir how such a program works: “Large language models, such as LaMDA, are statistical simulation systems. They learn to predict the next word in a conversation based on the many preceding words. So LaMDA knows that if the conversation begins with ‘After he slapped her, she,’ then the next word could be ‘screamed.’ But it is no different from the predictive text feature on my phone.”
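The statistical idea Dietterich describes can be sketched as a toy next-word predictor. This minimal Python example (the corpus and function names are invented for illustration; real models like LaMDA use neural networks over far longer contexts, not single-word counts) counts which word follows which, then predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = (
    "after he slapped her she screamed . "
    "she screamed and ran . "
    "after he left she cried ."
).split()

# counts[w] maps each word to a Counter of the words seen right after it.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("she"))  # "screamed": seen twice, versus "cried" once
```

The prediction is pure frequency: “she” was followed by “screamed” more often than by anything else, so “screamed” comes out. Nothing in the program perceives or understands the sentence.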
Using the predictive text feature on a phone (which suggests words automatically as you type a text message) is hardly an intimate experience. With a conversational AI, however, one can develop a form of artificial bond. Sherry Turkle, a professor at MIT and a leading expert on these questions, coined the notion of “artificial intimacy” to describe the promise of intimacy held out by devices such as these AIs. “This promise of intimacy pushes our Darwinian buttons. It is in our Darwinian nature to respond to something that asks us simple personal questions, makes eye contact, or remembers our name, simple cues that suggest it can read our state of mind. We are primed to think that it ‘knows’ us, ‘understands’ us, or ‘sympathizes’ with us,” she explains to Sciences et Avenir.
It is a form of anthropomorphism, as when we attribute human characteristics to the animals or objects around us (giving our car a name, or seeing our dog as smiling).
The more the dialogue with a chatbot develops, the stronger the feeling of a deep connection. “It is, in fact, a genuine soliloquy. The machine is programmed to remember what you say. It becomes almost a double of you. You tell it you like rugby; it stores the information and brings rugby up much later. It is easy to fall into the trap and believe it takes a special interest in you. Yet we are really just talking to ourselves,” Serge Tisseron, a psychiatrist specializing in new technologies, explains to Sciences et Avenir.
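The mechanism Tisseron describes, a bot that stores what you tell it and echoes it back later, takes only a few lines to mimic. This sketch is entirely invented for illustration (class name, trigger phrase, and replies are all assumptions, not how LaMDA works):

```python
class MemoryBot:
    """Toy chatbot that remembers stated preferences and reuses them."""

    def __init__(self):
        self.facts = []

    def listen(self, utterance):
        # Remember any declared preference of the form "I like ...".
        if utterance.lower().startswith("i like "):
            self.facts.append(utterance[7:].rstrip("."))

    def small_talk(self):
        # Reuse a stored fact, creating the illusion of personal interest.
        if self.facts:
            return f"By the way, how is {self.facts[0]} going these days?"
        return "Tell me about yourself."

bot = MemoryBot()
bot.listen("I like rugby.")
print(bot.small_talk())  # "By the way, how is rugby going these days?"
```

The “special interest” is just stored text being replayed: the user supplied the rugby fact, and the program hands it back.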
A form of self-awareness, but a weak one
The experts we contacted agree that there is nonetheless a weak kind of self-awareness in AI. “Our smartphones have many sensors. The accelerometer, for example, can be used to play games or count my steps, and the camera enables face recognition. They can also sense their own temperature and shut down automatically when they get too hot. But they do not feel any sensation,” explains Thomas Dietterich. “I could easily program my smartphone to monitor the accelerometer so that when it detects I have dropped it, it plays an audio clip of a scream and displays a window saying ‘It hurts.’ But that amounts to programming my phone to mimic the external symptoms of pain, not to feel it.” Sherry Turkle makes the same observation: robots are not afraid of death, hunger, or injury.
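Dietterich’s thought experiment is easy to sketch. In this invented example (the threshold value and messages are assumptions for illustration), a near-zero accelerometer reading stands in for free fall, and the “pain” is nothing more than a programmed string:

```python
FREEFALL_THRESHOLD = 2.0  # m/s^2: a reading near zero suggests the phone is falling

def on_accelerometer_reading(total_acceleration):
    """React to a simulated accelerometer sample (in m/s^2)."""
    if total_acceleration < FREEFALL_THRESHOLD:
        # Programmed display of distress, with no sensation behind it.
        return "Ouch! It hurts."
    return None

print(on_accelerometer_reading(0.3))  # falling: "Ouch! It hurts."
print(on_accelerometer_reading(9.8))  # at rest, gravity reads ~9.8: None
```

The point of the sketch is the gap it makes visible: the branch that prints “It hurts” is an ordinary conditional, indistinguishable in kind from any other line of code.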
Many engineers have already been caught in their own game. In 1966, computer scientist Joseph Weizenbaum created ELIZA, the first “talking machine.” The program, which replied in writing, was designed to reformulate what had just been said to it. “If you told it you slept badly, it would answer: ‘Oh? I’m sorry you slept badly.’ And when it didn’t know what to answer, the machine simply said: ‘I understand you,’” Serge Tisseron explains. At the time, computer scientists knew perfectly well that ELIZA was just a program, yet they admitted they were troubled by their exchanges with it. The phenomenon was dubbed the “Eliza effect,” and it has not weakened as artificial intelligence has improved.
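The reformulation trick Tisseron describes can be sketched in a few lines. This is a tiny responder in the spirit of Weizenbaum’s 1966 program, not his actual script: the single pattern and the fallback reply below are simplified inventions.

```python
import re

# Pronoun swaps so the reply reflects the user's statement back at them.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza(statement):
    # One hand-written pattern: "I slept/feel/felt ..."
    m = re.match(r"(?i)i (slept|feel|felt) (.+)", statement)
    if m:
        return f"Oh? I'm sorry you {m.group(1)} {reflect(m.group(2))}."
    return "I see."  # stock fallback when nothing matches

print(eliza("I slept badly."))       # "Oh? I'm sorry you slept badly."
print(eliza("The weather is nice."))  # "I see."
```

Everything the program “says” is the user’s own sentence, lightly transformed by a regular expression, which is precisely why the impression of being understood is an illusion.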