AI technology is advancing daily, and every day machines are becoming more and more human-like. The problem is that not everything these bots learn is being put to good use, and a recent study revealed that the technology is picking up racist and sexist biases. This will, of course, have an adverse effect on AI's ability to make sound decisions, and the problem will only grow as AI becomes better at interpreting and understanding human language. Unfortunately, the more data it's fed, the more it learns our biases about race and gender.
In one study conducted at Princeton University, researchers used GloVe to carry out a word-association task. GloVe is a machine-learning model that learns the meanings of words from large amounts of online text. When asked to match words like "flower" or "insect" with words defined as pleasant or unpleasant, such as "family" or "crash", the AI did this successfully, no problem. But when it was given a list of predominantly white-sounding names, such as Emily or Matt, and a list of black-sounding names, such as Ebony and Jamal, it linked the white-sounding names to pleasant words and the black-sounding names to unpleasant ones.
Joanna Bryson, co-author of the study and a computer scientist at the University of Bath, said, “A lot of people are saying that this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it. The danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad.” The discovery was made after the researchers created a scoring system that measured the positive and negative connotations linked with words in the text the AI had analyzed.
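To give a sense of how this kind of scoring works, here is a minimal sketch of a word-embedding association test in Python. It is not the study's actual code: the file path and the word lists are illustrative, and it simply checks whether each name sits closer to a set of pleasant words or a set of unpleasant words in a pre-trained GloVe vector space.

```python
import numpy as np

def load_glove(path):
    # Parse a plain-text GloVe file: each line is a word followed by its vector.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_association(word, attributes, vectors):
    # Average similarity between one target word and a set of attribute words.
    return np.mean([cosine(vectors[word], vectors[a]) for a in attributes])

# Illustrative inputs: a publicly available GloVe file and small example word lists,
# not the exact stimuli used in the Princeton study.
vectors = load_glove("glove.6B.300d.txt")
pleasant = ["family", "freedom", "health", "love", "peace"]
unpleasant = ["crash", "filth", "murder", "sickness", "agony"]

for name in ["emily", "matt", "ebony", "jamal"]:
    score = mean_association(name, pleasant, vectors) - mean_association(name, unpleasant, vectors)
    print(f"{name}: {score:+.3f}")  # positive = leans pleasant, negative = leans unpleasant
```

If the scores split along the lines the researchers describe, with white-sounding names leaning pleasant and black-sounding names leaning unpleasant, that mirrors the bias reported in the paper, which formalizes the idea as a statistical effect size modeled on the Implicit Association Test.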
“We replicated a spectrum of known biases, as measured by the Implicit Association Test [IAT], using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names,” reads the study, published in the journal Science.
Microsoft had a go at creating its own AI bot, named Tay, which was designed to understand the language used by young people online today. However, not long after it was launched, it became apparent there were a few flaws in Tay’s algorithm that needed ironing out. Because of these flaws, Tay responded to some questions in a racist manner, with answers that included racial slurs, support for genocide, and defenses of white supremacists. The bot also posted things like, “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.”
So, as fantastic as AI is, it’s clear that tight parameters need to be in place to stop any real problems from arising. The researchers wrote in the study, “We can learn that ‘prejudice is bad,’ that women used to be trapped in their homes and men in their careers, but now gender doesn’t necessarily determine the family role, and so forth. If AI is not built in a similar way, then it would be possible for prejudice absorbed by machine learning to have a much greater negative impact than when prejudice is absorbed in the same way by children.”