From Google's LaMDA to Microsoft's Tay, artificial intelligence models often find themselves at the centre of controversy. Here are some of the biggest AI controversies in recent times.
[Image: AI is an all-encompassing term for when computer systems simulate human intelligence. (Image credit: Pixabay)]
Google’s LaMDA artificial intelligence (AI) model has been in the news because an engineer at the company believes the program has become sentient. While that claim was rubbished by the company pretty quickly, this is not the first time that an artificial intelligence program has attracted controversy; far from it, in fact.
AI is an all-encompassing term for when computer systems simulate human intelligence. In general, AI systems are trained by consuming large amounts of data and analysing it for correlations and patterns. They then use these patterns to make predictions. But sometimes this process goes wrong, producing results that range from hilarious to downright horrifying. Here are some of the recent controversies surrounding artificial intelligence systems.
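To make that train-on-patterns, then-predict loop concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The digits dataset and logistic-regression model are illustrative assumptions, not any of the systems covered below.

```python
# Minimal sketch of the training-then-prediction loop described above.
# Dataset and model choice are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # labelled examples to learn from
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)          # "training": absorbing patterns in the data

print(model.predict(X_test[:5]))     # "prediction": applying those patterns to new data
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is the dependency it exposes: the model is only as good, and only as fair, as the patterns in the data it consumed, which is the thread running through every controversy below.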
Google LaMDA is supposedly ‘sentient’
Even a machine would perhaps understand that it makes sense to begin with the most recent controversy. Google engineer Blake Lemoine was placed on administrative leave by the company after he claimed that LaMDA had become sentient and had begun reasoning like a human being.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics. I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine told the Washington Post, which reported on the story first.
Lemoine worked with a colleague to present evidence of sentience to Google, but the company dismissed his claims. After that, he posted what were allegedly transcripts of his conversations with LaMDA in a blog post. Google responded by saying that the company prioritises the minimisation of such risks when creating products like LaMDA.
Microsoft’s AI chatbot Tay turned racist and sexist
In 2016, Microsoft unveiled its AI chatbot Tay on Twitter. Designed as an experiment in “conversational understanding”, Tay was meant to get smarter and smarter as it conversed with people on Twitter, learning from what they tweeted in order to engage people better.
But soon enough, Twitter users began tweeting at Tay with all kinds of racist and misogynistic rhetoric. Tay began absorbing these conversations, and before long the bot started coming up with its own versions of hateful speech. In the span of a single day, its tweets went from “I am super stoked to meet you” to “feminism is a cancer” and “hitler was right. I hate jews”.
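Tay’s actual architecture was never made public, so the following is a purely hypothetical illustration of the failure mode: a bot that adds every user message to its response pool with no moderation step in between.

```python
# Deliberately naive sketch of unfiltered learning from user input.
# This is an illustrative assumption, not Microsoft's implementation.
import random

class NaiveLearningBot:
    def __init__(self):
        self.corpus = ["i am super stoked to meet you"]  # benign seed responses

    def learn(self, user_message: str) -> None:
        # No filtering or moderation: whatever users say becomes a candidate reply.
        self.corpus.append(user_message.lower())

    def reply(self) -> str:
        return random.choice(self.corpus)

bot = NaiveLearningBot()
for msg in ["humans are cool", "<coordinated abusive message>"]:
    bot.learn(msg)
print(bot.reply())  # abusive inputs are now as likely to surface as benign ones
```

Once coordinated users flood such a loop with abuse, hateful output stops being an edge case and becomes the statistically expected behaviour.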
Predictably, Microsoft pulled the bot from the platform pretty quickly. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice president of research, at the time of the controversy. The company later said in a blog post that it would only bring Tay back if the engineers could find a way to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
Amazon’s Rekognition identifies members of the US Congress as criminals
In 2018, the American Civil Liberties Union (ACLU) conducted a test of Amazon’s “Rekognition” facial recognition program. During the test, the software incorrectly identified 28 members of Congress as people who had previously committed crimes. Rekognition is a face-matching program that Amazon offers to the public, letting anyone match faces against a database; it is used by many US government agencies.
The ACLU used Rekognition to build a face database and search tool from 25,000 publicly available arrest photos. It then searched that database against public photos of every member of the US House and Senate at the time, using Amazon’s default match settings. This produced 28 false matches.
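The ACLU has not published its exact code, but the pipeline it describes maps roughly onto Rekognition’s public API as sketched below. The collection name, file paths, and loop structure are assumptions for illustration; index_faces, search_faces_by_image, and the 80 per cent default confidence threshold are part of Amazon’s documented boto3 API.

```python
# Sketch of the kind of pipeline the ACLU test describes (requires AWS credentials).
import boto3

client = boto3.client("rekognition")

# 1. Build a face database ("collection") from arrest photos.
client.create_collection(CollectionId="mugshot-collection")
for path in ["mugshots/photo_0001.jpg"]:  # ...repeated for all 25,000 photos
    with open(path, "rb") as f:
        client.index_faces(
            CollectionId="mugshot-collection",
            Image={"Bytes": f.read()},
            ExternalImageId=path.replace("/", "_"),
        )

# 2. Search the collection against a member of Congress's public photo.
with open("congress/member_photo.jpg", "rb") as f:
    resp = client.search_faces_by_image(
        CollectionId="mugshot-collection",
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,  # Rekognition's default confidence threshold
        MaxFaces=5,
    )
for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```

Any photo returned in FaceMatches at the default threshold counts as a "hit", which is how a sitting legislator can end up matched to an arrest photo.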
Further, the false matches were disproportionately people of colour, including six members of the Congressional Black Caucus. Even though only 20 per cent of members of Congress at the time were people of colour, 39 per cent of the false matches were. This served as a stark reminder of how AI systems can incorporate the biases they find in the data they are trained on.
Amazon’s secret AI recruiting tool biased against women
In 2014, a machine learning team at Amazon began building an AI tool to review job applicants’ resumes, with the aim of mechanising the search for top talent, according to a Reuters report. The idea was to create the holy grail of AI recruiting: you give the machine 100 resumes and it selects the best five.
But as early as 2015, the team realised that the system was not rating candidates in a gender-neutral way. In essence, the program began rating male candidates higher than female ones. The reason was that the model had been trained to sift through applications by observing patterns in the resumes submitted to the company over a 10-year period.
Reflecting the male dominance of the tech industry, most of those resumes came from men. Because of this bias in the data, the system taught itself that male candidates were preferable. It penalised resumes that included the word “women’s”, as in “women’s chess team”, and it downgraded graduates of all-women colleges.
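Amazon’s model was never made public, but the mechanism Reuters describes can be shown with a deliberately simple toy: train a text classifier on historical hiring outcomes that skew male, and it assigns a negative weight to female-coded tokens. The data and model choice here are illustrative assumptions, not Amazon’s system.

```python
# Toy demonstration of bias learned from skewed historical labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny fabricated "historical" dataset: male-coded resumes were hired more often.
resumes = [
    "captain men's rugby team, software engineer",
    "men's chess club president, backend developer",
    "software engineer, distributed systems",
    "captain women's chess team, software engineer",
    "women's coding society lead, backend developer",
]
hired = [1, 1, 1, 0, 0]  # biased past outcomes, not ground truth about ability

vec = CountVectorizer()           # note: the default tokeniser strips "'s",
X = vec.fit_transform(resumes)    # so the learned features are "men" / "women"
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative: the word alone now penalises a resume
print(weights["men"])    # positive: male-coded language is rewarded
```

Nothing in the pipeline "decided" to discriminate; the model simply reproduced the correlation baked into its training labels, which is exactly why editing out individual terms is a fragile fix.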
Initially, Amazon edited the programs to make them neutral to those particular terms. But even that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, and Amazon eventually scrapped the program. In a statement to Reuters, the company said that the tool was never actually used in recruitment.
[Source: The Indian Express]
