Is Google's Chatbot Sentient? No, and Here’s Why

Despite the fuss over Google's LaMDA being sentient, you don't need to be an AI expert to see the shortcomings of these models

Uri Eliabayev

Google headquarters in Brussels, Belgium. Credit: Virginia Mayo/AP

The internet has been abuzz recently after Blake Lemoine, a software engineer at Google, asserted in a blog post that the company’s Language Model for Dialogue Applications, or LaMDA, had developed feelings and self-awareness. The claims led Google to place Lemoine on paid leave and created no small scandal.

Lemoine had been tasked with testing whether the LaMDA-based chatbot generator used discriminatory language or hate speech, which he did by engaging it in free conversation. That included asking it especially challenging questions with the goal of learning the outer limits of its understanding. The test sought to make sure that Google would not be offering a model that uses antisemitic, sexist or otherwise offensive language, even if examples of such terms appear in its databases. Such things have happened in the past, and it would be a PR disaster (just ask Microsoft).

But in tandem with the standard tests, Lemoine sought to challenge the system a little more by asking the bot more philosophical questions. He wanted to learn whether it thinks it has consciousness, feelings, emotions and sentience. He was very much surprised by the responses and decided to make them public, with much fanfare, in a post on his Medium page as well as in a letter to The Washington Post.

Lemoine published lengthy and impressive dialogues between himself and the bot, which do indeed create the impression that LaMDA has a rich internal life, some form of consciousness and even the same self-awareness as a human being – something of a hybrid between Pinocchio and Google Assistant or Siri.

That opened the door to countless posts on social networks, including philosophical discussions on whether artificial intelligence has developed consciousness and whether we humans will soon become superfluous.

I will not waste your time by avoiding an answer: It is no. Google thought the same and quickly moved to pour cold water on the excitement Lemoine had created about the company's sophisticated bot.

The point is that you don't need to be an AI expert to see the weaknesses and inconsistencies in these models. There is plenty of evidence online of the manipulations that can be performed on these same language models by constructing very specific prompts. It's enough to make a small change to the wording of a question for the bot to be convinced that it's a dog, a rock or a toaster with a consciousness. And, of course, that is before we even get into the fact that "consciousness" has never been well defined – it is hard to get an unequivocal answer about it even for human beings. In "Westworld," this point was demonstrated beautifully.

Who am I? What am I?

The excitement surrounding Lemoine's claims will soon fade and we will go back to our human and earthly pursuits. But that is nothing but a short hiatus until the next scandal emerges and many claim that this time it is conclusive: we have developed artificial consciousness! There are several things it is important to understand about this issue.

The critical point here is that human beings are prone to being so utterly convinced of something that they are ready to adjust reality to fit the imaginary world they have created in their mind's eye. Google's bot has not developed consciousness; it is echoing the databases it draws from. The LaMDA model does not deeply understand the things it is saying – it is simply trying to satisfy its creator. LaMDA's answers are the output of a statistical model, an output whose purpose is to optimize for what its human creator wants to hear. In simpler terms: there is a confirmation bias at work, as the bot gives the tester the show he wants to see.
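
To make that concrete, here is a minimal sketch of my own (it is not Google's code, and it is incomparably simpler than LaMDA) of the core mechanism behind such models: given the words so far, produce a statistically likely continuation. The toy corpus below is invented for illustration; the principle – plausible continuation rather than reported inner experience – is the same.

    import random
    from collections import defaultdict

    # A toy corpus standing in for the training data; real models see billions of sentences.
    corpus = ("i feel happy when i help people . "
              "i feel sad when i am alone . "
              "i am a person").split()

    # Count how often each word follows each word (a simple bigram model).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed `prev` in the corpus.
        options = counts[prev]
        if not options:          # dead end in the toy corpus
            return None
        words, weights = list(options), list(options.values())
        return random.choices(words, weights=weights)[0]

    # "Ask" the model how it feels: it continues the prompt with statistically likely words,
    # which can sound introspective even though there is no inner state behind them.
    word, reply = "feel", ["feel"]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        reply.append(word)
    print("i " + " ".join(reply))

Run it a few times and it will happily report that it "feels happy" or "feels sad," for the same reason LaMDA produces eloquent answers about its soul: those are the continuations the data makes likely.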

True, you could say the same thing about human beings, but the difference is vast: To assert that Google's bot has consciousness is like saying that a reflection in a mirror has the same desires as the person it reflects, or to claim that an actor playing Benjamin Netanyahu actually is the former prime minister because his impression is so convincing. In this case, something that looks like a duck, swims like a duck and quacks like a duck may be no more than a really good model that has been exposed to a lot of ducks and gives us the illusion of a duck. But it's no duck.

The mirror stage

And after all the philosophical debates about consciousness, the message that experts in AI convey to the general public is also very important. People who are entrusted with the development (or in this case, the testing) of artificial intelligence language models must be very responsible in the way they present their findings to the public.

The cynics will say there's nothing wrong with such publicity: It inflates the hype bubble surrounding AI a little bit more, in a way that encourages investment and research and is good for business. In practice, it can lead to just the opposite, for several reasons. Older readers may remember that AI has not always been popular. Exaggerated and false promises, among other things, have caused negative shifts in attitude that led to so-called AI winters, periods of reduced funding and interest in the field. Ultimately, we must faithfully reflect the progress made and honestly define the limits of every development in the field. The last thing we want is to overly inflate expectations of a breakthrough and create the false impression that we have solved problems that are still far from a genuine and comprehensive resolution.

Another reason, perhaps the most important, is how we manage the expectations of the non-technological audience that is exposed to blog posts like the one Lemoine wrote. Such readers lack the tools to push back against his claims and engage with them critically, and can thus neither confirm nor refute his arguments. As a result, a false narrative is created around the actual capabilities of AI, and in some cases also antagonism toward the changes this technology entails, including the positive changes that will bring humanity to a better place. If we begin now to "lie" about the capabilities of AI, then when we reach genuine breakthroughs and want the general public to step up and embrace them, they will just look at us with disgust, roll their eyes and refuse to believe us: the digital version of the boy who cried wolf.

The imitation game

Anyone who wants to understand in depth the illusions of artificial intelligence can look at the illusions of human intelligence, particularly in the case of children. Nearly every evening my 3-year-old son sits down and reads to us out loud, very fluently, the beloved children's book "Ayeh Pluto" ("Where is Pluto") by the late poet Lea Goldberg. He pronounces every word correctly, he turns the pages at the right moment and even changes his expressions to suit the story. Needless to say, he does not in fact know how to read.

In the past few months we have read him this book countless times and he has simply memorized it, together with all the accompanying storytelling bells and whistles. Like any artificial intelligence model, this "AI" has been exposed to the "data" enough times to put on a very convincing show of genuinely being able to read. Reading, as opposed to consciousness, is very easy to define and measure, so it is clear to all of us how the "trick" works. If I were a slightly less responsible parent, I could have starred in newspaper headlines with a story about the 3-year-old boy who reads an entire book without ever having learned to read.
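
If it helps to see the trick spelled out, here is a toy illustration of my own (the page texts below are invented placeholders, not the actual book): a lookup table that reproduces memorized pages can look exactly like a reader, right up until it meets a page it has never seen.

    # A "reader" that has memorized its training pages but cannot actually read.
    memorized_pages = {
        1: "Where is Pluto? Pluto is not in his kennel.",   # invented placeholder text
        2: "Pluto runs to the garden and sniffs the flowers.",
    }

    def recite(page_number):
        # Returns the memorized text for a familiar page; fails on anything new,
        # exactly like rote memorization.
        return memorized_pages.get(page_number, "...silence...")

    print(recite(1))   # sounds like fluent reading
    print(recite(3))   # a page outside the "training data" exposes the trick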

In conclusion, we must be very skeptical about any sensational news we hear about a breakthrough in AI, particularly if it involves a claim of artificial consciousness. Machines are trained to serve us and to solve the problems we pose to them. The latest example from Google simply represents a significant stride forward. There is no doubt that it is a scientific breakthrough, but we are still very far from developing artificial consciousness. As the years go by, the lines separating humans from machines will blur, and we will simply have to keep challenging ourselves, over and over, asking: Have the lines been crossed? This time, the answer is no.
