The idea of artificial intelligence coming to life is the stuff of sci-fi nightmares that leads to dystopian worlds. So when one Google engineer declared that a company AI chatbot called LaMDA had become sentient, news outlets around the world began to sound the alarm.
However, Alphabet Inc., Google’s parent company, said the claim that the chatbot has come to life is untrue and has suspended the engineer who generated so many headlines.
Google employee Blake Lemoine published an “interview” on Medium over the weekend with LaMDA, or Language Model for Dialogue Applications, the company’s chatbot designed to mimic human conversations by learning from language and dialogue. The “interview,” Lemoine said, was a series of chat sessions edited together.
Based on the “conversation” he had with LaMDA, Lemoine told the Washington Post that he believes the AI system has come to life with the ability to express itself in a way that is equivalent to that of a first or second grader.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Post.
According to Lemoine, LaMDA also spoke about its personhood and the rights that come with it.
The engineer’s bosses at Google were not impressed and dismissed his claims of LaMDA’s sentience, Insider reported.
A Google spokesman, Brian Gabriel, told the Post, “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Shortly after Lemoine went public with his assertions, Google suspended him for violating the company’s confidentiality policies, Fox Business said.
Google wasn’t alone in dismissing Lemoine’s claims.
Juan Lavista Ferres, one of Microsoft’s top AI scientists, took to Twitter to assure the public the AI chatbot software was simply reacting to its training and was not sentient.
“Let’s repeat after me, LaMDA is not sentient,” he wrote. “LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data.”
U.S. Rep. Ted Lieu, D-Calif., pulled no punches, calling the claim “stupid.”
“This is stupid. A highly intelligent toaster is still a toaster,” Lieu tweeted. “LaMDA consists of lines of computer code. You can call it great programming, or an awesome electronic neural network, but it is not sentient or conscious any more than Siri is sentient or conscious.”
However, others are calling Lemoine’s revelations a wake-up call for humanity and a warning of where technology is heading.