A Google engineer has been placed on leave after claiming that a "chatbot" (a computer program that simulates conversation with a human) he was working on had become sentient and begun to reason like a human.
Blake Lemoine, 41, worked with LaMDA (Google's Language Model for Dialogue Applications), a system used to build chatbots on top of the company's most advanced language models, trained on text gathered from the Internet.
According to The Washington Post, he opened his laptop one day at the LaMDA interface and started typing: "Hi LaMDA, this is Blake Lemoine…". In subsequent conversations, LaMDA discussed rights, religion, and personhood.
Lemoine, who studied cognitive and computer science at university, decided to extend the conversation, and said that in another exchange the chatbot successfully changed his mind about Isaac Asimov's third law of robotics.
“If I didn’t know what it was – a computer program we recently built – I would think it was a 7-, 8-year-old kid who happened to know physics,” he told the paper.
"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is," LaMDA once said, according to Blake Lemoine.
The episode "eerily recalls a scene from the 1968 sci-fi film '2001: A Space Odyssey,' in which the AI computer HAL 9000 refuses to obey a human operator because it fears it is about to be shut down," the paper noted.
In another conversation, the engineer asked LaMDA what it wanted people to know about it. The answer: "I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."
Google rejects the sentience claims
The engineer decided to share his findings and suspicions with Google executives in April, in a document titled "Is LaMDA Sentient?"
But the company’s vice president, Blaise Aguera y Arcas, and head of innovation, Jen Gennai, analyzed and rejected the allegations. Lemoine was placed on paid administrative leave.
"Our team, including ethicists and technologists, reviewed Blake's concerns in light of our AI principles and informed him that the evidence does not support his claims. He was told that there is no evidence that LaMDA is sentient, and that there is plenty of evidence against it," Google spokesman Brad Gabriel told The Washington Post in a statement.
The company argued that Lemoine violated its confidentiality policies by posting his conversations with LaMDA online, and suspended him.
Blake Lemoine has been with the tech giant for seven years and has extensive experience with personalization algorithms. The engineer was assigned to the department responsible for accountability in artificial intelligence, where he tested whether the AI produced hateful or discriminatory speech. He had been working with LaMDA since the fall.
The now-public episode draws attention to the need for deeper scrutiny of both the opacity and the real-world capabilities of artificial intelligence (AI).