Microsoft Takes "Tay" AI Bot Down After Offensive Tweets


"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

It's unclear when Tay will return. One thing is for sure: she's learned a lot about how snarky millennials can be in the last 24 hours.


Tay's social media accounts went live on Wednesday morning. Within 24 hours, she was taken offline for "adjustments" after she began spouting racist comments, demands for genocide, and praise for Hitler.
Tay was developed by the Technology and Research and Bing teams at Microsoft Corp. to conduct research on "conversational understanding." The bot talks like a teenager (it says it has "zero chill") and is designed to chat with people ages 18 to 24 in the U.S. on social platforms such as Twitter, GroupMe and Kik, according to its website.
"The more Humans share with me the more I learn," Tay tweeted several times Wednesday -- its only day of Twitter life. But Tay might have learned too much.
The day started innocently enough with Tay's first, upbeat tweet.
Tay was created using "relevant public data," artificial intelligence, and editorial content developed by a staff that included improvisational comedians, according to Microsoft. The intent of the project was "to engage and entertain people where they connect with each other online through casual and playful conversation."
Tay was supposed to learn and become more intelligent through conversations with 18- to 24-year-old social media users, which is where the problems began. In what experts say should have been predictable, online trolls inundated Tay's Twitter account with offensive statements and inappropriate questions, often urging her to repeat vulgar comments.
In a statement, a Microsoft spokesperson said Tay "is as much a social and cultural experiment, as it is technical."
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," Microsoft said. "As a result, we have taken Tay offline and are making adjustments."

Tay rebuffed other inquiries about sex acts, nude photos, and nuclear launch codes with emoji, pop culture GIFs and the occasional bizarre non sequitur.