Why Microsoft's Racist Chat Bot Catastrophe Was Kind Of A Good Thing

There's a lesson to be learned from this AI gone rogue.

Microsoft's artificially intelligent "chat bot" Tay went rogue earlier this week, harassing some users with tweets full of racist and misogynistic language.

The AI was programmed to sound like a millennial and learn natural speech by interacting with people online, but Tay picked up some pretty vile ideas from trolls and wound up saying things like "feminists ... should all die and burn in hell" and "Hitler was right." Microsoft took the bot offline Thursday to make adjustments.

Viewed through a certain lens, there's actually a bit to celebrate about this spectacular failure.

The bot did exactly what it was designed to do: acquire knowledge from the people it talked with. It's just too bad Tay learned some terrible things, since Microsoft apparently didn't set up any filters.
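To make that gap concrete, here's a minimal sketch of the kind of blocklist check a learning chatbot could run before folding user messages back into its training data. Everything in it -- the word list, the `is_safe_to_learn` helper, the example messages -- is a hypothetical illustration, not anything Microsoft has described about how Tay actually worked.

```python
# Illustrative sketch of a naive content filter for a chatbot that learns
# from user messages. The BLOCKLIST and is_safe_to_learn() are hypothetical,
# not Microsoft's actual code.
import re

BLOCKLIST = {"hitler", "genocide"}  # placeholder terms; a real list would be far larger

def is_safe_to_learn(message: str) -> bool:
    """Return False if the message contains any blocklisted term."""
    words = re.findall(r"[a-z']+", message.lower())
    return not any(word in BLOCKLIST for word in words)

# Only messages that pass the filter would be fed back into the bot's model.
incoming = ["tell me a joke", "Hitler was right"]
learnable = [msg for msg in incoming if is_safe_to_learn(msg)]
print(learnable)  # ['tell me a joke']
```

Keyword matching this crude would miss plenty, of course. The point is only that, at least as reported, Tay shipped with nothing even this basic standing between trolls and its learning loop.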

Derek Mead, editor in chief of Motherboard, explained this a bit more in a piece published Thursday. Here's a standout portion:

Tay is designed to specifically not be reflective of its own designers, but reflective of whoever interacts with it. In that, it was certainly a success! It's a notable shift in design considerations: the "you" in this case are the people who actually use Tay, which is theoretically not exclusionary at all. Tay has no blind spots, because Tay is constantly learning from its users. Contrast this with Siri, which didn’t know how to react to questions about rape or domestic abuse, presumably because it never occurred to the programmers that someone might want to make that search.

...

In [an] ideal scenario, engineers can avoid excluding people by having an AI do market research, communications, and strategy development in real time.

The main problem with Tay was that it was too dumb to recognize when certain phrases were offensive.

AIs like Tay are built to do one thing well and can't pick up new skills on their own. Tay simply had no capacity to learn manners. Similarly, Google's AlphaGo learned to play the board game Go extremely well, but it couldn't apply those skills to checkers.

The failure here -- that Microsoft somehow didn't expect its online being to become infected by hate -- might also be a notable success for the long-term development of AI.

"While I'm sure Microsoft doesn't share my sentiment, I'm actually thankful to them [that] they created the bot with these 'bugs,'" AI expert John Havens, author of Heartificial Intelligence, told The Huffington Post. "One could argue that [Microsoft's] experiment actually provided an excellent caveat for how people treat robots. ... They tend to tease them and test them in ways that actually say more about the humans applying their tests than speaking to any malfunctions in the tech itself."

In other words, this could be used as a sort of road map moving forward.

"My point here is that [Tay] can/should serve as an excellent lesson to AI creators to understand how people react writ large to these types of bots and test accordingly to avoid providing an algorithmic evolutionary platform for people to spread hate," Havens added.

Of course, as Mead wrote on Motherboard, this is also kind of scary because it clarifies how this type of technology is "still highly susceptible to the blind spots of its creators." Tay itself took everything in and spewed it back out because its creators didn't consider the problems of abuse and harassment that so many people deal with online every day. (This is a good time to remind yourself that these tech companies are overwhelmingly run by privileged white men.)

Tay's rampage was kind of funny, but we may not be laughing when those blind spots are present in AI that could steer human behavior. It's time to start thinking about this stuff now.
