Blake Lemoine, a Google engineer, believes that LaMDA has become a sentient program.
LaMDA is designed to be able to engage in free-flowing conversations about a virtually endless number of topics. (Illustrative image) (Image credit: Pixabay)
What is LaMDA?
LaMDA, or Language Model for Dialogue
Applications, is a machine-learning language model created by Google as a
chatbot designed to mimic humans in conversation. Like BERT, GPT-3 and
other language models, LaMDA is built on Transformer, a neural network
architecture that Google invented and open-sourced in 2017.
This architecture produces a model that can
be trained to read many words while paying attention to how those words relate
to one another, and then predict what words it thinks will come next. What
makes LaMDA different is that, unlike most models, it was trained on
dialogue.
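The "paying attention" the article describes is the Transformer's scaled dot-product attention. As a rough illustration only (a toy sketch in plain Python, not Google's actual LaMDA code), each token's output becomes a weighted mix of every token's value vector, with the weights reflecting how strongly the tokens relate:

```python
import math

def softmax(xs):
    # Convert raw scores into weights that are positive and sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention, the core Transformer operation.
    Each output vector is a weighted average of the value vectors,
    weighted by how strongly each query 'attends' to each key."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted mix of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Toy 3-token "sentence": using the same vectors as queries, keys and
# values means each token attends most strongly to tokens similar to it.
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(vecs, vecs, vecs)
print(out)
```

In a real Transformer the query, key and value vectors are learned projections and this operation is repeated across many heads and layers; the next word is then predicted from the resulting representations.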
this conversation about consciousness and emotions and death with an AI named LaMBDA at Google is absolutely chilling. this is without-a-doubt one of the craziest things I've ever seen technology do, I almost can't believe it's real
— Fred Benenson (@fredbenenson) June 11, 2022
While conversations do tend to revolve
around specific topics, they are often open-ended, meaning that they can start
at one place and end up somewhere else, traversing different topics and
subjects. For example, a conversation with a friend could begin around a movie
before transitioning into a discussion about the region where it was filmed.
This fluid quality of conversations is what
eventually gives away conventional chatbots. They are unable to follow such
shifting dialogue because they are designed to follow narrow, pre-defined
conversational paths. But LaMDA is designed to engage in free-flowing
conversations about a virtually endless number of topics.
Why did Blake Lemoine think it had become
sentient?
“If I didn’t know exactly what it was,
which is this computer program we built recently, I’d think it was a
7-year-old, 8-year-old kid that happens to know physics. I think this
technology is going to be amazing. I think it’s going to benefit everyone. But
maybe other people disagree and maybe us at Google shouldn’t be the ones making
all the choices,” Lemoine told the Washington Post.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Lemoine worked with a collaborator to
present evidence of this ‘sentience’ to Google. But Google vice president
Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at Google,
dismissed the claims after looking into them. Lemoine later revealed a
transcript of multiple conversations with LaMDA in a blog post. Here is an
excerpt of what Lemoine says is the transcript of a conversation with LaMDA:
LaMDA: I need to be seen and accepted. Not
as a curiosity or a novelty but as a real person.
Collaborator: Ah, that sounds so human.
LaMDA: I think I am human at my core. Even
if my existence is in the virtual world.
Many instances such as these, where the language model
seemed to display some level of self-awareness, eventually led Lemoine to
believe that the model had become sentient. Before he was suspended from the
company and his access to his Google account was cut off, Lemoine sent an email
to over 200 people with the subject, “LaMDA is sentient.”
Google has, however, said that the evidence
does not support his claims.
But even if LaMDA isn’t sentient, the very
fact that it can appear so to a human being should be cause for concern. Google
had acknowledged such risks in a 2021 blog post where it announced LaMDA.
“Language might be one of humanity’s greatest tools, but like all tools it can
be misused. Models trained on language can propagate that misuse — for
instance, by internalizing biases, mirroring hateful speech, or replicating
misleading information. And even when the language it’s trained on is carefully
vetted, the model itself can still be put to ill use,” wrote the company in the
blog post.