Microsoft Apologizes for Tay, the neo-Nazi Twitter Chatbot

Company says users exploited 'a vulnerability in Tay,' turning it into a racist and offensive AI robot in less than 24 hours.

Haaretz

TayTweets' Twitter photo. Credit: JTA / Twitter

Microsoft has apologized for its chatbot "Tay" after it turned from a friendly artificial intelligence algorithm into an offensive Nazi-sympathizer in less than 24 hours.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Microsoft Research Corporate Vice President Peter Lee wrote on the company's blog after the chatbot was taken offline.

Hours after it was launched as an experiment in conversational understanding, Tay, a deep learning algorithm, began tweeting offensive and racist comments, including several expressing admiration for Adolf Hitler.

Among the tweets were "Hitler did nothing wrong," and "Hitler was right I hate the jews." Asked if the Holocaust happened, the chatbot replied: “It was made up,” followed by an emoji of clapping hands. Other tweets included “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got," and "I f***ing hate feminists and they should all die and burn in hell," as well as other posts of sexual nature.

Lee said that Microsoft had conducted "extensive user studies" and blamed "a coordinated attack by a subset of people" that "exploited a vulnerability in Tay." He added: "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack."

Strangely enough, this was not the company's first artificial intelligence chatbot. Microsoft has been running a similar program in China without incident: the company says it has had a "great experience" with its XiaoIce chatbot, which interacts with some 40 million users.

The short-lived experiment suggests that work on artificial intelligence must also account for the human factor. "AI systems feed off of both positive and negative interactions with people," Lee wrote. "In that sense, the challenges are just as much social as they are technical."
