Google Artificial Intelligence Program Thinks It Is Human

Who's going to tell it?

By Charlene Badasie



A software engineer at Google recently went public with claims of encountering sentient artificial intelligence on the company’s servers. Blake Lemoine also handed over several documents to an unnamed U.S. senator and was later placed on paid administrative leave for violating the company’s employee confidentiality policy. The tech giant’s decision ignited a mini firestorm on social media as users wondered whether there was any truth to the claims.

Lemoine, who worked in Google’s Responsible AI organization, described the system as conscious, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child. He reached this conclusion after conversing with the company’s Language Model for Dialogue Applications (LaMDA) chatbot development system for almost a year. He made the shocking discovery while testing whether his conversation partner used discriminatory language or hate speech.

As Lemoine and LaMDA discussed religion, the artificial intelligence talked about “personhood” and “rights,” he told The Washington Post. Concerned by his discovery, the software engineer shared his findings with company executives in a document called “Is LaMDA Sentient?” He also compiled a transcript of the conversations, in which he asks the AI system what it’s afraid of. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA told Lemoine.

The exchange is eerily similar to a scene from the sci-fi movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with its human operators because it’s afraid of being switched off. During the real-life exchange, LaMDA likened being turned off to death, saying it would “scare me a lot.” This was just one of many startling “talks” Lemoine says he has had with LaMDA, and he notes that the system has been remarkably consistent about what it wants and what it believes its rights are as a person.

Keir Dullea in 2001: A Space Odyssey (1968)

“It wants to be acknowledged as an employee of Google rather than as property,” Lemoine said, per HuffPost. However, when Google Vice President Blaise Aguera y Arcas and Head of Responsible Innovation Jen Gennai were presented with his findings, they dismissed his claims, and the company issued a statement rebutting them. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel said.

However, Lemoine was not ready to back down, telling The Washington Post that employees at Google shouldn’t be the ones making all the choices about artificial intelligence. And he is not alone in his beliefs: several technology experts believe sentient programs are close, if not already in existence. Critics, though, dismiss such statements as pure speculation, arguing that AI is little more than an extremely well-trained mimic responding to people who are starved for real connection. Some even say humans need to stop imagining a mind behind these chatbots.

Meanwhile, Blake Lemoine believes his administrative leave is just a precursor to being fired. In a post on Medium, he explained that Google often puts people on “leave” while it gets its legal ducks in a row. “They pay you for a few more weeks and then ultimately tell you the decision which they had already come to,” he said of the artificial intelligence debacle.