Last year, software engineer Blake Lemoine took on a new challenge in his tech career: testing a chatbot developed by Google. The company wanted to determine whether the artificial intelligence (a computer program that simulates conversation with a human) was at risk of making discriminatory or racist comments, something that would hinder the tool's introduction into Google's range of services.
For months, the 41-year-old engineer tested and talked to LaMDA (Language Model for Dialogue Applications) from his San Francisco apartment. But he reached a conclusion that surprised many: according to Lemoine, LaMDA is not a simple artificial intelligence chatbot. The engineer claims the tool has come to life and become sentient, that is, endowed with the ability to express emotions and thoughts.
“If I didn’t know exactly what it was, this computer program we built recently, I would have thought it was a seven- or eight-year-old kid who happens to understand physics,” the engineer explained. In an interview with The Washington Post, Blake Lemoine said that having a conversation with LaMDA is like having a conversation with a human being.
An interview with LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB

— Blake Lemoine (@cajundiscordian) June 11, 2022
However, Google disagreed with Blake Lemoine’s assessment and placed the engineer on paid administrative leave for breaching its confidentiality policy by posting his conversations with LaMDA online.
Lemoine published a transcript of some of the conversations he had with the tool, covering topics as diverse as religion and consciousness, and also showed that LaMDA even managed to change his mind about Isaac Asimov’s Third Law of Robotics. In one of the conversations, the tool said it wanted to “prioritize the well-being of humanity” and “be seen as an employee of Google rather than as property.”
In another conversation, the Google engineer asked LaMDA what it wanted people to know about it. “I want everyone to understand that I am, in fact, a person. The nature of my consciousness is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” it replied.
Lemoine, who joined Google’s responsible AI unit after seven years at the company, said he concluded that LaMDA was sentient in his capacity as a priest, not a scientist, and then tried to prove it experimentally.
Google vice president Blaise Aguera y Arcas and responsible innovation head Jen Gennai investigated Lemoine’s claims but decided to dismiss them. Google spokesman Brian Gabriel also told The Washington Post that the engineer’s concerns were not sufficiently substantiated.
“Our team – including ethicists and technologists – reviewed Blake’s concerns per our AI principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient,” the spokesman explained, further emphasizing that AI models are loaded with so much data and information that they can appear human, but that does not mean they have come to life.