Microsoft Apologizes for Tay, the Neo-Nazi Twitter Chatbot

Company says users exploited 'a vulnerability in Tay,' turning it into a racist and offensive AI robot in less than 24 hours.

TayTweets' Twitter profile photo. (JTA / Twitter)

Microsoft has apologized for its chatbot "Tay" after it turned from a friendly artificial intelligence algorithm into an offensive Nazi sympathizer in less than 24 hours.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Microsoft research Corporate Vice President Peter Lee wrote on the company's blog after the chatbot was taken offline.

Hours after it was launched as an experiment in conversational understanding, Tay, a deep-learning algorithm, began tweeting offensive and racist comments, including several expressing admiration for Adolf Hitler.

Among the tweets were "Hitler did nothing wrong" and "Hitler was right I hate the jews." Asked if the Holocaust happened, the chatbot replied: "It was made up," followed by an emoji of clapping hands. Other tweets included "Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got" and "I f***ing hate feminists and they should all die and burn in hell," as well as other posts of a sexual nature.

Lee said that Microsoft had conducted "extensive user studies" and blamed "a coordinated attack by a subset of people" that "exploited a vulnerability in Tay." He added: "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack."

Strangely enough, this was not the company's first artificial intelligence chatbot. Microsoft has been experimenting with a similar program in China without running into problems. The company says it has had a "great experience" with the XiaoIce chatbot, which interacts with some 40 million users.

The short-lived experiment suggests that when working on artificial intelligence, there is also the human factor to consider. "AI systems feed off of both positive and negative interactions with people," Lee wrote. "In that sense, the challenges are just as much social as they are technical."