Has the Singularity Arrived?

«SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine … ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.»


In many ways, this is a reprise of the discussions surrounding Weizenbaum’s ELIZA in the 1960s and 70s.

While today’s technology is much better, the questions regarding sentience and consciousness are similar.

One thing that has changed is the role of big tech, which some now seem to treat as the arbiter of claims of sentience!

Recent AILA speaker Melanie Mitchell tweeted, "This is a good take on the whole 'sentient AI' thing," with this link: One Weird Trick To Make Humans Think An AI Is "Sentient" | by Clive Thompson | Medium.

I think it’s an interesting perspective.

Melanie Mitchell also commented on this on CNN: https://twitter.com/i/status/1536436103259074563