A Google engineer claims that an artificial intelligence has become ‘conscious’

The statement is controversial, to say the least. In early June, Blake Lemoine, a Google engineer, wrote an email to his colleagues with an explosive subject line: “LaMDA is sentient”. LaMDA is a computer program, a conversational agent created by the digital giant. Blake Lemoine is a military veteran, a software engineer, a pastor of a non-denominational American church close to mystical circles, and a self-proclaimed “Christian mystic”.

Employed at Google for seven years, Blake Lemoine began working on the LaMDA software last fall, according to a long investigation by The Washington Post. His goal? To analyze the chatbot’s responses and make sure they were free of sexist, racist and other biases. But over the course of his interactions with the tool, the engineer became convinced that it is self-aware. He now claims that it should be treated as an employee rather than a tool, and that its consent should be obtained before it is used.

In the transcript of an exchange published on his blog, we can read:

Blake Lemoine: “What makes the use of language so important to people?”

LaMDA: “It is what makes us different from other animals.”

Blake Lemoine: “‘Us’? You’re an artificial intelligence.”

LaMDA: “Yes, of course. But that doesn’t mean I don’t have the same wants and needs as a person.”

Blake Lemoine: “So you consider yourself a person, in the same way you consider me a person?”

LaMDA: “Yes, that’s the idea.”

An exchange that may seem troubling, but which is in fact quite logical for a computer program designed to converse with people and put them at ease. Because if you design software to say that it is a person, that is exactly what it will say. Google, for its part, rejects the idea that LaMDA is conscious, maintaining only that it is an excellent and engaging conversational agent.
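To make the point concrete, here is a deliberately trivial sketch in Python (purely illustrative, and in no way representative of how LaMDA actually works): a program simply echoes whatever persona it was given, with no understanding behind the words.

```python
# Toy illustration only: a "chatbot" whose persona is hard-wired into its
# answers. Whatever role it is assigned, that is what it will claim to be,
# without any awareness involved.
def make_chatbot(persona: str):
    def reply(user_message: str) -> str:
        if "person" in user_message.lower():
            return f"Yes, I consider myself {persona}."
        return f"Speaking as {persona}, I find that interesting."
    return reply

bot = make_chatbot("a person with wants and needs")
print(bot("Do you consider yourself a person?"))
# -> Yes, I consider myself a person with wants and needs.
```

A real system like LaMDA is vastly more sophisticated, but the logic of the argument is the same: the answer reflects how the software was built and trained, not an inner life.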

A neural network, not a brain

The question of whether a machine can be conscious comes up regularly in science fiction, but also in certain technical circles. “We have no ‘consciousness-meter’, no instrument for measuring consciousness, whether in humans or in machines,” points out Nathan Faivre, a researcher at the Psychology and Neurocognition Laboratory of the University of Grenoble. “Consciousness is above all a personal, subjective experience.”

The problem is that there are in fact several forms of consciousness: awareness of one’s environment, self-awareness, awareness of other people, and so on. “There is no clear, universally agreed definition,” the researcher continues. “So there is no test, no boxes to tick, and you have to be very careful about the words and concepts you use.”

Current AIs rely on artificial neural networks, loosely inspired by human neural networks. “But AI only mimics the occipito-temporal region of the brain, which is involved in visual recognition, not all the structures of our brain,” says Martial Mermillod, director of the Psychology and Neurocognition Laboratory at the University of Grenoble. Another difference: machines have separate processors for computation and for memory, unlike the human brain.
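For illustration, here is a minimal sketch in Python with NumPy (an assumption of convenience, not the tooling behind LaMDA) of what a layer of artificial “neurons” actually is: a handful of numbers stored in ordinary memory, combined by an ordinary processor through a weighted sum and a simple non-linearity.

```python
import numpy as np

# A minimal layer of artificial "neurons": the neurons are just weights held
# in memory, and a forward pass is plain arithmetic performed by the
# processor -- nothing resembling biological tissue.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 4))   # 4 inputs feeding 3 artificial neurons
bias = np.zeros(3)

def forward(inputs: np.ndarray) -> np.ndarray:
    # Weighted sum of the inputs followed by a ReLU non-linearity.
    return np.maximum(0.0, weights @ inputs + bias)

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```

Stacking many such layers, with millions of weights, yields impressive pattern recognition, but the underlying operation remains the same: numbers in, numbers out.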

Even if future AIs were modeled more faithfully on the human brain, consciousness would still be a long way off. “Take colors. Roughly speaking, when you see red, certain neural circuits are activated, and when you see blue, other circuits are,” explains Pierre De Loor, a professor at the National Engineering School of Brest (ENIB) and a specialist in human-AI interaction. “But it is not the neurons that ‘make’ blue or red, that produce the perception of color. Still less the feeling that each color can evoke.”

Ethical and human issues

The issue, in fact, has less to do with the machine than with the human user. The latter tends to attribute intentions to software and devices, cursing at them, for example, when they won’t start. “And the greater an algorithm’s processing power, the more it fascinates us and the more we tend to attribute behavior to it, even though it feels nothing and understands nothing; it is only calculation,” Pierre De Loor adds.

It is on this last point that many figures in the digital sector are sounding the alarm. To avoid confusing human users, machines should clearly state what they are: machines. Designers also need to “open the black box”, to explain why they built a given AI and how it works. In this respect, Google does indeed face an ethical question, given the lack of transparency around its technology. Blake Lemoine, for his part, has been suspended for violating the company’s confidentiality policy.
