Yahoo India Web Search

Search results

  1. Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. Controversy followed when the bot began posting inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after its launch. [1]

  2. Mar 24, 2016 · Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.

  3. Mar 24, 2016 · Today, Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets. Tay was set up with a young, female persona that Microsoft's AI programmers...

  4. Mar 24, 2016 · It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational...

  5. Nov 25, 2019 · UPDATE 4 JANUARY 2024: In 2016, Microsoft’s chatbot Tay—designed to pick up its lexicon and syntax from interactions with real people posting comments on Twitter—was barraged with antisocial ideas and vulgar language. Within a few hours of it landing in bad company, it began parroting the worst of what one might encounter on social media.

  6. Mar 25, 2016 · Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist. But in doing so, it made clear that Tay's views were a result of...

  7. May 10, 2023 · Microsoft's Tay went from a promising AI to a total catastrophe in less than a day. Here's what the company learned.