The Internet turns Tay, Microsoft's millennial AI chatbot, into a racist bigot
Tay began as an experiment in artificial intelligence released by Microsoft on Wednesday. It's a chatbot you can interact with on GroupMe, Kik, and Twitter, and it learns from its conversations with people.
The bot has a quirky penchant for tweeting emoji and using “millennial speak,” but that quickly turned into a rabid hatefest. As Business Insider first reported, the Internet soon discovered you could get Tay to repeat phrases back to you. Once that happened, the jig was up and another earnest attempt at “good vibes” PR was hijacked. The bot was soon parroting everything from hateful Gamergate mantras to an offensive racial slur aimed at the president.
Microsoft has since deleted Tay’s most offensive commentary, but we were able to find one example in Google’s cache linking Hitler with atheism. At this writing, Tay is offline as Microsoft works to fix the issue.
Why this matters: Microsoft, it seems, forgot to equip its chatbot with some key language filters. That would have been an honest mistake in 2007 or 2010, but it’s borderline irresponsible in 2016. By now, it should be clear that the Internet has a rabid dark side that can drive people from their homes or send a SWAT team to their door. As game developer Zoe Quinn pointed out on Twitter after the Tay debacle, “If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.”
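For readers wondering what even the crudest version of such a filter looks like, here is a minimal, purely illustrative sketch in Python of an output blocklist check. The function names and placeholder terms are invented for this example; real moderation systems rely on far more than keyword matching (classifiers, human review, rate limits), and this is not a description of how Tay or any Microsoft system actually works.

```python
# Hypothetical illustration only: screen a bot's candidate reply against a
# blocklist before posting it. Placeholder terms stand in for a real list.

BLOCKED_TERMS = {"slur_one", "slur_two"}  # invented placeholders


def is_safe_reply(text: str) -> bool:
    """Return False if the candidate reply contains any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def post_reply(candidate: str) -> None:
    # Only publish replies that pass the filter; otherwise drop them.
    if is_safe_reply(candidate):
        print(f"posting: {candidate}")
    else:
        print("reply suppressed by filter")


post_reply("hello, twitter!")          # posted
post_reply("something with slur_one")  # suppressed
```

Even a check this simple would have blocked the bot from echoing the most obvious abuse verbatim, which is the baseline Quinn’s point asks designers to consider before shipping.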