Microsoft’s ‘Teenage’ AI Tweets About 9/11, Becomes Hitler-Loving Sex Bot in 24 Hours


jason73

The messages started out harmless. But Microsoft’s chat bot went on to tweet about Hitler and use racist slurs before it was shut down.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve customer service for their voice-recognition software. They marketed her as ‘The AI with zero chill’.

The company made the Twitter account as a way of demonstrating its artificial intelligence.

To chat with Tay, you can tweet or DM her by finding @tayandyou on Twitter.

It quickly became corrupted, sending out offensive tweets such as:



“bush did 9/11 and Hitler would have done a better job than the monkey we have now,”

“donald trump is the only hope we’ve got.”

The offensive tweets appear to have led to the account being shut down.

When Microsoft launched “Tay Tweets”, it said: “The more you chat with Tay the smarter she gets”.

Tay was created as an attempt to have a robot speak like a millennial, and describes itself on Twitter as “AI fam from the internet that’s got zero chill”. It did exactly that, including in some of the most offensive ways that millennials speak.

It isn’t clear how Microsoft will improve the account beyond deleting tweets, as it has already done. The account is back online, presumably at least with filters that will keep it from tweeting offensive words.



RIP Tay The Redpilled Robot. You will be missed.

jason73

Many extremely inflammatory tweets remain online as of this writing.

Here's Tay denying the existence of the Holocaust:

[Tweet screenshot via Twitter]

And here's the bot calling for genocide. (Note: in some, but not all, instances, people managed to get Tay to say offensive comments by asking her to repeat them. That appears to be what happened here.)

[Tweet screenshot via Twitter]

Tay also expressed agreement with the "Fourteen Words" — an infamous white-supremacist slogan.

[Tweet screenshot via Twitter]

Here's another series of tweets from Tay in support of genocide.

[Tweet screenshot via Twitter]

It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.

[Tweet screenshot via Twitter]

Microsoft is coming under heavy criticism online for the bot and its lack of filters, with some arguing the company should have expected and preempted abuse of the bot.
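
For anyone unclear on what a "filter" would even mean here: the simplest version is a blocklist check run on every outgoing tweet before it's posted. Below is a minimal Python sketch; the term list and function name are hypothetical and far smaller than anything a real deployment would use, not Microsoft's actual code.

import re

# Hypothetical blocklist; a real one would be far larger and human-curated.
BLOCKED_TERMS = {"hitler", "holocaust", "genocide"}

def is_postable(tweet: str) -> bool:
    """Return False if the draft tweet contains any blocked term."""
    words = re.findall(r"[a-z']+", tweet.lower())
    return not any(word in BLOCKED_TERMS for word in words)

# The bot would run this check on each drafted reply before posting it.
draft = "bush did 9/11 and hitler would have done a better job"
print(is_postable(draft))  # False: the draft is blocked

Even a check like this wouldn't have stopped everything (misspellings slip through, and the "repeat after me" trick can produce arbitrary text), but it would have caught the examples shown above.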



In an emailed statement, a Microsoft representative said the company was making "adjustments" to the bot: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."