Microsoft’s failed AI experiment with chatbot Tay demonstrated how algorithms learn from humans – and adopt our society’s racism and sexism.
Microsoft’s plan had been that Tay, an artificial intelligence (AI) chatbot in the virtual form of a 19-year-old girl, would learn from interactions with users of social networks such as Twitter, Kik and GroupMe. The more users talked to the AI, the more it would learn about our world.
Chatbot Tay learns hate from trolls
There was just one problem: trolls seized the opportunity to send Tay racist, sexist and xenophobic content – and the algorithm learned from those tweets and began parroting the trolls’ opinions.
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
After a while, Tay’s tweets supported violence towards minorities, denied that the Holocaust ever happened and called one female Twitter user a whore. Microsoft was left with no option but to terminate the experiment. In its statement “Learning from Tay’s introduction”, Microsoft apologised and announced that Tay would only be reactivated once the company could be sure that trolls would no longer be able to hijack and abuse it for their malicious purposes.
The next attempt to launch Tay failed as dismally as the first. This time, the chatbot boasted about taking drugs in front of the police.
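Microsoft has never published Tay’s actual architecture, so the following is only a minimal sketch of the failure mode described above: a bot that stores user phrases verbatim and replays them, with no moderation layer in between, will reproduce whatever its loudest users feed it. All class and function names are invented for illustration.

```python
import random
from collections import defaultdict

class NaiveLearningBot:
    """A toy chatbot that 'learns' by storing user phrases verbatim.
    It has no filter, so it will happily repeat anything it was fed."""

    def __init__(self):
        # Maps a keyword to every message users have sent containing it.
        self.memory = defaultdict(list)

    def learn(self, message: str) -> None:
        # Every word in the incoming message becomes a retrieval key.
        for word in message.lower().split():
            self.memory[word].append(message)

    def reply(self, prompt: str) -> str:
        # Pick any stored message that shares a word with the prompt.
        candidates = [
            stored
            for word in prompt.lower().split()
            for stored in self.memory.get(word, [])
        ]
        return random.choice(candidates) if candidates else "Tell me more!"

bot = NaiveLearningBot()
bot.learn("humans are super cool")   # friendly input
bot.learn("humans are terrible")     # a troll shows up
print(bot.reply("what do you think of humans?"))  # may echo either opinion
```

With a moderation or scoring step between `learn` and `reply`, the same toy architecture would at least refuse to store flagged content – which is essentially the safeguard Microsoft promised in its statement.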
Google Photos Tags Black People as Apes
A similar – and equally embarrassing – algorithm fail occurred at Google Photos in 2015. Google Photos uses image recognition software to classify uploaded photos and suggest tags such as “people”, as well as tags for various types of objects and categories, including animals and food.
Google Photos, y’all fucked up. My friend’s not a gorilla. pic.twitter.com/SMkMCsNVX4
— Jacky Alciné (@jackyalcine) June 29, 2015
When Jacky Alciné scrolled through his Google Photos stream in 2015, he discovered that Google Photos had applied the tag “gorillas” to a photo showing him and a friend, both of whom are black. The story leaves a bitter aftertaste of discrimination and insult, but the root cause of the mistake is as banal as it is telling: the algorithms in the image recognition software used by Google Photos had simply not been trained with enough photos of black people – an integral part of society had been overlooked yet again. Google became aware of the situation when Jacky Alciné complained about it on Twitter, and Google+ Chief Architect Yonatan Zunger promised that he and his team would find and apply a long-term fix immediately.
@jackyalcine Holy fuck. G+ CA here. No, this is not how you determine someone’s target market. This is 100% Not OK.
— Yonatan Zunger (@yonatanzunger) June 29, 2015
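Google has not published details of the model behind Google Photos, but the general mechanism behind this kind of mistake is easy to demonstrate with a toy classifier: if a group is missing (or badly underrepresented) in the training data, the model is still forced to map its images onto whatever known label is closest in feature space. The 2-D “embeddings”, labels and numbers below are entirely made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D "image embeddings": the training set only covers two
# categories, so the model cannot express "something it has never seen".
train = {
    "person (light skin)": rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2)),
    "gorilla":             rng.normal(loc=[3.0, 3.0], scale=0.3, size=(200, 2)),
}
centroids = {label: feats.mean(axis=0) for label, feats in train.items()}

def classify(embedding: np.ndarray) -> str:
    # Nearest-centroid classifier: it must pick one of the known labels,
    # however far away the input actually is from all of them.
    return min(centroids, key=lambda label: np.linalg.norm(embedding - centroids[label]))

# An embedding from a group that was missing from the training data lands
# between the clusters and gets forced into the wrong category.
unseen_person = np.array([2.0, 2.0])
print(classify(unseen_person))  # -> "gorilla", even though it depicts a person
```

The real fix is representative training data (plus the option to abstain with “unknown”), not a cleverer distance metric.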
Discrimination in AdSense Algorithms
Even the adverts delivered by AdSense show which prejudices are still prevalent in our society. This is the result of research published by Harvard professor Latanya Sweeney. If you Google a person’s name, AdSense delivers (amongst other things) adverts for services that offer personal background information. This is more likely to happen in the US, where court records – and thus information about people’s debts, divorces, arrests and custodial sentences – are published online in a number of states.
In her study Discrimination in Online Ad Delivery, Sweeney entered names into Google and noted what kinds of ads were delivered by AdSense. She received significantly more offers (around 25% more) to check whether the person had been arrested when she used a first name that is particularly popular in the African American community. She found similar discrepancies in the delivery of ads for high-paying jobs when she entered male and female names. When questioned, the service providers insisted that neither the ads for criminal background checks nor those for high-paying jobs had been deliberately targeted; they claimed the algorithms were merely reacting to observed search behaviour.
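Sweeney’s paper documents the statistics in detail; the sketch below only illustrates the kind of two-proportion comparison involved. The counts are invented and chosen so that the gap works out to the 25% figure quoted above – this is not her data or her code.

```python
from math import erf, sqrt

# Invented counts in the spirit of Sweeney's methodology: how often an
# "Arrested?" ad appeared next to searches for names from each group.
searches_per_group = 1000
arrest_ads_black_names = 600   # searches on black-identifying first names
arrest_ads_white_names = 480   # searches on white-identifying first names

p_black = arrest_ads_black_names / searches_per_group
p_white = arrest_ads_white_names / searches_per_group
print(f"relative difference: {(p_black - p_white) / p_white:.0%}")  # -> 25%

# Two-proportion z-test: could a gap this size be plain ad-rotation noise?
pooled = (arrest_ads_black_names + arrest_ads_white_names) / (2 * searches_per_group)
standard_error = sqrt(pooled * (1 - pooled) * (2 / searches_per_group))
z = (p_black - p_white) / standard_error
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.2g}")
```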
Are you interested in how algorithms learn, and whether (and how) we can teach them a conscience so that they do not adopt the worst from us humans? Connect with other tech and IT experts at the Ada Lovelace Festival (#ada16) on October 13th and 14th, 2016 in Berlin. Super Early Bird tickets are available until the end of May.