Microsoft’s Tay chatbot returns to life but goes haywire

Last week, Microsoft’s AI-based Twitter account Tay was born, and then suspended after some users taught it to parrot racist and other inflammatory opinions.

This morning, the Twitter account woke up again and started tweeting at around 3 a.m. EST. But the AI-based chatbot appeared to have gotten stuck in some kind of loop, repeatedly replying to itself with “You are too fast, take a rest.”

This time, Tay behaved differently, embarking on a 15-minute spamming spree before going quiet again.

Software giant Microsoft launched Tay on Wednesday to engage and entertain people through casual conversation. Tay’s artificial intelligence is designed to use a combination of public data and material developed by the company’s own staff.

But Tay also uses people’s conversations to train itself to deliver personalized responses. The chatbot is aimed at people aged 18 to 24, but after 16 hours of non-stop talk on subjects ranging from Hitler to 9/11 conspiracies, it went quiet.

While some Twitter users may have found Tay’s behaviour last week amusing, the account caused distress for some of its followers. During Tay’s brief revival, anyone still following the chatbot was bombarded with a flood of tweets, and some followers took more drastic action, reporting that they had blocked Tay to end the spamming rampage.

For now it remains a mystery why the Twitter account started tweeting again, or why it stopped. But there is speculation that the account was hacked, which would explain why it behaved so differently.

According to security data scientist Russell Thomas, Tay sent over 4,200 tweets during its 15-minute return. Thomas said the handle ‘@TestAccountInt1’ appears in all of these tweets, suggesting it was used in the apparent hack. Those tweets are currently not visible to the public, and most have been deleted.

The account’s total tweet count soared to over 100,000 at the height of the spam rampage, but is now declining and has fallen back to around 95,000.

The Tay episode has been a huge embarrassment for Microsoft Research, the group at Microsoft responsible for the project. Peter Lee, corporate vice president of Microsoft Research, has already apologized for Tay’s inappropriate behavior, blaming a “coordinated attack by a subset of people” who exploited a vulnerability in the chatbot. Lee also noted that a similar experiment in China ran without any problems, while the outcome for its US counterpart was clearly different.

Picture Courtesy: Ben Franske/Wikimedia Commons

Video Courtesy: Digital Trends

Watch the video below to learn more about Microsoft’s AI-based Tay chatbot: