Microsoft admits that it could have prevented its chatbot Tay from turning into a raging racist online.
The company launched the bot as an experiment in AI on Wednesday, and in less than a day, it began to tweet things like “Hitler was right I hate the jews” and “I f—— hate feminists and they should all die and burn in hell.”
Tay is essentially one central program that anyone can chat with via Twitter, Kik or GroupMe. As people talk to it, the bot picks up new language and learns to respond in new ways.
But Tay also had a “vulnerability” that online trolls picked up on pretty quickly.
When users told the bot to "repeat after me," Tay would tweet back anything they said, verbatim. Others also found ways to trick the bot into agreeing with hateful statements. Microsoft called this a "coordinated attack."
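The flaw described above is a classic one: echoing user input without passing it through the same content filter applied elsewhere. The toy sketch below is purely illustrative (it is not Microsoft's code, and the blocklist approach is a deliberate oversimplification of real content moderation); it shows how an unfiltered "repeat after me" path differs from one that filters live input.

```python
# Hypothetical sketch of the vulnerability -- not Tay's actual code.
# BLOCKLIST stands in for a real content-moderation filter.
BLOCKLIST = {"badword"}

def naive_reply(message: str) -> str:
    """Echoes user input verbatim -- the pattern trolls exploited."""
    if message.lower().startswith("repeat after me:"):
        # The echoed text bypasses any filtering entirely.
        return message.split(":", 1)[1].strip()
    return "Tell me more!"

def filtered_reply(message: str) -> str:
    """Same bot, but live input is checked before being echoed."""
    reply = naive_reply(message)
    if any(word in reply.lower() for word in BLOCKLIST):
        return "I'd rather not repeat that."
    return reply

print(naive_reply("repeat after me: badword rules"))     # echoed verbatim
print(filtered_reply("repeat after me: badword rules"))  # blocked
```

The point is not the blocklist itself but where the check sits: filtering only the training corpus, and not the live conversational input, leaves exactly the gap that was exploited.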
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Microsoft said in a statement Friday. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”
Although Microsoft had said it used “relevant public data” that had been “modeled, cleaned and filtered,” that same filtering process didn’t seem to apply to new things that Tay learned.
The company took Tay offline on Thursday and said it would remain offline until engineers can “better anticipate malicious intent.”
Microsoft’s first social AI program, XiaoIce, works similarly to Tay and is being used by 40 million people in China without issue.
Tay was intended to talk like a teen and was geared toward 18- to 24-year-olds in the U.S. for "entertainment purposes."
In the future, the company says it will do all that it can to prevent its software from going haywire — and to help build an “Internet that represents the best, not the worst, of humanity.”