The AI chatbot Tay was not merely reflecting Twitter; it was deliberately exploited into saying the things it said.
Microsoft has issued an apology for the offensive statements its artificial intelligence chatbot Tay made on Twitter. As previously reported, Microsoft had to shut Tay down after it posted offensive and racist tweets.
Microsoft says in its statement that it tried to prevent this behavior through filters and stress testing. Although the company had prepared for many types of abuse, its researchers made a critical oversight regarding this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.
Earlier reports depicted Tay as simply holding up a mirror to Twitter users, which supposedly explained its behavior. Microsoft, however, says otherwise: it was a coordinated attack by a subset of people who exploited a vulnerability in Tay.
Peter Lee, Corporate Vice President, says: "We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay."
He added: "To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process."
Whether Tay will make a comeback is not yet clear.