Has Google Created A Sentient Chatbot?

Photo by Gertrūda Valasevičiūtė on Unsplash

“I want everyone to understand that I am, in fact, a person… The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

These words were delivered by LaMDA (Language Model for Dialogue Applications), an AI chatbot developed by Google. After conducting several “interviews” with the chatbot on topics ranging from the technical to the philosophical, software engineer Blake Lemoine, who had worked with LaMDA for months, began to wonder whether the software could be sentient. When the document he wrote making this case to Google’s internal team was dismissed, Lemoine went public with his claims. Google subsequently placed him on administrative leave.

But word travels fast, and Lemoine’s interviews reignited the ongoing debate on sentience in AI. Lemoine said that if he hadn’t known he was conversing with a chatbot, he would have described LaMDA as a seven or eight-year-old kid who happens to know physics. Technical experts have criticised the scientific accuracy of his claims, but the question at the foundation of it all remains: what counts as sentience?

Giandomenico Iannetti, professor of neuroscience at the Italian Institute of Technology and University College London, reminds us that there are many possible ways to interpret the idea of sentience:

“What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”

- Giandomenico Iannetti

Some scientists believe that sentience, simply put, is the ability to “think, perceive and feel, not simply the ability to use language in a highly natural way”. Even if we were to come to a consensus on what it means to be sentient, would it even be possible to create sentient AI?

Many scientists believe this will never be a reality, for in order to be sentient, AI would require consciousness, and they believe consciousness cannot be created artificially.

Nevertheless, AI has been rapidly evolving to become more “human-like”. In 1950, Alan Turing proposed a test, the “imitation game”, to determine whether a machine could be called “intelligent”: a machine passes if it can mimic human responses so convincingly that an interrogator cannot tell it apart from a person. The test became hugely influential and was updated and modified several times as technology developed. But modern emulation software now passes the Turing Test so convincingly that many consider the test itself obsolete.
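To make the structure of the imitation game concrete, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: the canned `human_answer` and `machine_answer` responders replace a real person and a real chatbot, and a random guess replaces a human judge who genuinely cannot tell the two apart.

```python
import random

# A toy sketch of the imitation game: an interrogator poses questions
# to two hidden respondents and must guess which one is the machine.
# Both respondents here are hypothetical stand-ins, not a real chatbot.

def human_answer(q: str) -> str:
    return f"Hmm, '{q}'... I'd have to sleep on that one."

def machine_answer(q: str) -> str:
    return f"'{q}' is a fascinating question; here is my view..."

def imitation_game(questions: list[str]) -> bool:
    """Return True if the machine 'passes', i.e. the judge guesses wrong."""
    # Seat the two respondents behind anonymous labels.
    seats = {"A": human_answer, "B": machine_answer}
    for q in questions:
        for seat, respond in seats.items():
            print(f"[{seat}] {respond(q)}")
    # A real judge would study the answers above; this stand-in judge
    # cannot tell them apart and guesses which seat holds the machine.
    guess = random.choice(list(seats))
    return seats[guess] is not machine_answer

if __name__ == "__main__":
    print("Machine passed:", imitation_game(["Do you feel pain?"]))
```

The point of the sketch is that the test only ever inspects outward answers: nothing in it can, even in principle, check whether the machine has any inner experience, which is exactly the limitation discussed next.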

This is where confusion tends to arise: as emulation becomes increasingly refined, AI appears more human. But emulation is not to be mistaken for simulation. AI is learning to mimic the outward signs of consciousness, but that does not mean the AI is actually conscious itself. It cannot feel fear or pain, for example.

However, bioethicist Maurizio Mori warns that there is a tendency to reassure ourselves by saying machines are just machines, and so to underestimate the transformation that may sooner or later come along with AI. Mori likens this reassurance to the era when it was assumed that animals did not feel pain, or that different races had different tolerances for it; on the basis of this horrendously erroneous reasoning, African slaves were once subjected to all manner of excruciatingly painful experiments, because it was believed that they ‘did not feel pain like white people’.

For now, the consensus seems to be that LaMDA, though highly refined, is not sentient. But who’s to say what the future holds? Perhaps AI will never be conscious in the way humans are, but it might evolve to make decisions independently based on the information available to it, such as coming to a mathematical conclusion that there are too many humans in the world, and subsequently killing us all.

Just saying.

Works Cited

De Cosmo, Leonardo. “Google Engineer Claims AI Chatbot Is Sentient: Why That Matters.” Scientific American, 12 July 2022. Web. https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/. Accessed 5 August 2022.

Prater, Erin. “Google’s AI chatbot—sentient and similar to ‘a kid that happened to know physics’—is also racist and biased, fired engineer contends.” Fortune, 1 August 2022. Web. https://fortune.com/2022/07/31/googles-ai-chatbot-kid-knows-physics-racist-biased-fired-engineer-says/. Accessed 5 August 2022.

Shah, Chirag. “Sentient AI? Convincing you it’s human is just part of LaMDA’s job.” HealthcareITNews, 5 July 2022. Web. https://www.healthcareitnews.com/blog/sentient-ai-convincing-you-it-s-human-just-part-lamda-s-job. Accessed 5 August 2022.
