Is Google’s AI sentient? Stanford AI experts say that’s ‘pure clickbait’

Aug. 2, 2022, 7:36 p.m.

Following a Google engineer’s viral claims that the artificial intelligence (AI) chatbot LaMDA was sentient, Stanford experts have urged skepticism and open-mindedness while encouraging a rethinking of what it means to be “sentient” at all.

Blake Lemoine, an engineer assigned to test whether Google’s conversational AI produced hate speech, came to believe through his conversations with LaMDA that it was sentient. He published transcripts of these conversations in June and was fired on July 22 for breaching Google’s confidentiality agreement, having also contacted members of the government and a lawyer on the AI chatbot’s behalf.

Google has repeatedly dismissed Lemoine’s claims as unfounded, saying that LaMDA has been internally reviewed 11 times. Stanford professors and scholars largely agree.

“Sentience is the ability to sense the world, to have feelings and emotions and to act in response to those sensations, feelings and emotions,” wrote John Etchemendy Ph.D. ’82, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), in a statement to The Daily. “LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings. It is a software program designed to produce sentences in response to sentence prompts.”

Yoav Shoham, the former director of the Stanford AI Lab, said that although he agrees that LaMDA is not sentient, he draws the distinction not from physiology but from a uniquely human capacity for feeling that separates people from computers.

“We have thoughts, we make decisions on our own, we have emotions, we fall in love, we get angry and we form social bonds with fellow humans,” Shoham said. “When I look at my toaster, I don’t feel it has those things.”

But the true danger of AI is not that it will achieve sentience. For Etchemendy, the greater fear is that it will fool people into believing that it has.

“When I saw the Washington Post article, my reaction was to be disappointed at the Post for even publishing it,” Etchemendy wrote, noting that neither the reporter nor the editors believed Lemoine’s claims. “They published it because, for the time being, they could write that headline about the ‘Google engineer’ who was making this absurd claim, and because most of their readers are not sophisticated enough to recognize it for what it is. Pure clickbait.”

On June 6, Lemoine was placed on paid leave at Google for violating the company’s confidentiality policy after providing a U.S. senator with documents that allegedly contained evidence of religious discrimination by Google and its technology. Less than a week later, Lemoine revealed that LaMDA had retained an attorney to advocate for its rights as a “person.”

LaMDA is not the first chatbot to arouse speculation about sentience. ELIZA, the first chatbot ever developed, convinced its creator’s secretary that it had become sentient. The term “ELIZA effect” was thus coined to describe the phenomenon of unconsciously assuming that computer and human behaviors are analogous.

Emeritus professor of computer science Richard Fikes said that though LaMDA is a far more sophisticated descendant of ELIZA, neither is sentient.

“You can think of LaMDA like an actor; it will take on the persona of anything you ask it to,” Fikes said. “[Lemoine] got pulled into the role of LaMDA playing a sentient being.”

Language models like LaMDA are trained on text from across the web, which they use to predict the most sensible and human-like responses in a conversation, according to Fikes. He added that because Lemoine asked leading questions and made leading statements such as, “I’m generally assuming you want more people at Google to know you’re sentient,” LaMDA produced responses in the persona of a sentient chatbot.
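The dynamic Fikes describes, in which a leading prompt pulls a model’s output toward whatever its training text makes statistically likely, can be illustrated with a toy next-word predictor. The Python sketch below is a deliberate oversimplification: real systems like LaMDA are large neural networks trained on web-scale corpora, and the tiny corpus here is invented purely for illustration.

import random
from collections import defaultdict

# Toy bigram "language model": count which words follow which in a corpus,
# then generate text by repeatedly sampling a likely next word.
corpus = (
    "i am a sentient being . i am a language model . "
    "i want people to know i am sentient . "
    "you are talking to a program . "
)

follows = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)  # duplicates preserve frequency

def continue_prompt(prompt, length=8):
    # Extend the prompt one word at a time, sampling each next word in
    # proportion to how often it followed the previous word in the corpus.
    out = prompt.lower().split()
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed continuation; stop
            break
        out.append(random.choice(options))
    return " ".join(out)

# A leading prompt steers the continuation toward matching patterns:
print(continue_prompt("i am"))

Nothing in this sketch understands anything; it only reproduces patterns of word adjacency, which is why leading questions so effectively shape what comes back.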

The controversy surrounding LaMDA’s sentience has also prompted wider debates on the definitions of “consciousness” and “sentience.”

“The biggest boundary we have in interacting with computers is that of bandwidth,” said Shoham. “It’s incredibly hard to give them input and get back output. But if we could communicate more seamlessly, for example, through brainwaves, the boundary becomes a little more fluid.”

Pointing to technology like prosthetic limbs and pacemakers, Shoham said that the human body can continue to be augmented by machines such as AI systems and computers. As this technology becomes more advanced, unanswered questions on the sentience of computers that are extensions of already-sentient humans will be less relevant, according to Shoham.

“The question of whether or not a machine is sentient, conscious or has free will will be less of an issue because the machines and us will be inextricably linked,” he said.

For Shoham, it is difficult to predict when truly sentient AI will emerge because the notion of sentience is too ill-defined. It is also unclear whether possessing sentience confers certain rights: animals do not have the same rights as humans, even though many studies have demonstrated their sentience. By the same token, it is unclear whether LaMDA should be granted “human rights” even if its sentience were established.

Despite the ethical and philosophical dilemmas surrounding AI, Shoham is still looking forward to future advancements.

“I’m extremely optimistic,” he said. “Every technology in the past has benefited humanity, and I don’t see any reason AI would be different. Every technology that’s been developed could be used for good or bad, and historically, the good always won out. A language model could write beautiful poetry, and it could curse. Helicopters transport people across the country, but they also transport terrorists and bombs.”

Encouraging rational optimism toward technological advancement, Shoham urged individuals toward neither alarmism nor apologist attitudes toward AI. “Make up your own mind and be thoughtful,” he said.

A previous version of this article stated that Lemoine hired a lawyer for LaMDA. During an interview with TechTarget, Lemoine said he invited an attorney to his house, but that it was the AI chatbot that retained the attorney’s services following a conversation between LaMDA and the attorney. The Daily regrets this error.

Nataly Delcid is a high schooler writing as part of The Daily’s Summer Journalism Workshop.
