Coded Bias: A Story of Harmful Algorithms and AI Interventions

Chloe Nelson
4 min read · Apr 26, 2021
Photo by Markus Spiske on Unsplash

Coded Bias, directed by Shalini Kantayya, is a film that keeps diving deeper into the murky depths of algorithms and Artificial Intelligence. Much like an iceberg, most of the use of algorithms and AI in the United States sits below the surface: it is kept out of public view, and only some of it can be directly observed. The film explains that AI is built on algorithms, and that neither is monitored by any sort of ethics committee. It opens by introducing Joy Buolamwini, a computer scientist working out of the MIT Media Lab, who could not be detected by the facial recognition software she was using to build a mirror that would project her heroes’ faces onto her own reflection. Her face was only detected after she put on an all-white face mask. That experience led her to research facial recognition software and the AI behind it. She found that these technologies are often inaccurate and work best for white men; white women are misidentified more often than white men, and people of color, especially darker-skinned women, are misidentified most often of all.

The film explores this theme through examples of surveillance in England: one man was fined because he chose to cover his face while walking past a facial recognition camera, and a 14-year-old boy was misidentified as a different person and stopped, searched, and questioned by the police. Still serious, though less alarming than misidentification, is the film’s account of how many corporations use AI and algorithms to track users and their likes, purchases, and other interactions online. Social media companies use this data to target advertisements and even to create potential customer reliability ratings that could be sold to retailers (Facebook has patented a system to do this). The whole film shows just how poorly monitored facial recognition, AI, and algorithms are, and how any convenience facial recognition might offer is overshadowed by the damage this technology does to people’s lives.
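To make that disparity a little more concrete, here is a tiny, made-up sketch of the kind of audit a researcher might run: tally how often a system fails for each demographic group and compare the error rates. The groups and numbers below are invented purely for illustration; they are not Buolamwini’s data or the film’s.

```python
from collections import defaultdict

# Toy audit: each record is (demographic_group, was_correctly_identified).
# The records are made up for illustration only.
results = [
    ("lighter-skinned man", True), ("lighter-skinned man", True),
    ("lighter-skinned woman", True), ("lighter-skinned woman", False),
    ("darker-skinned man", True), ("darker-skinned man", False),
    ("darker-skinned woman", False), ("darker-skinned woman", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A fair system would show similar error rates for every group;
# large gaps between groups are exactly the bias the film describes.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate")
```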

One thing I had never heard of before was the value-added model for evaluating teachers. The film explains it as an algorithm that takes in several data components and produces a teacher rating based on that data. This model has cost educators their jobs, tenure, and reputations, yet the actual methodology behind it is not shared with the teachers, leaving the method of evaluation mysterious and opaque. One example from the film was a middle school teacher who had received numerous teacher-of-the-year and other awards for his outstanding work. Once his school adopted the value-added model, his once positive and outstanding ratings took a nosedive and made him question his ability to teach. But it was not his ability that had changed; the evaluation itself was flawed, and it led to lawsuits in multiple school districts that used it. I find this method extremely concerning. As a future educator myself, I worry about how an opaque, automated method of evaluation could hold any power over the very human job of teaching. Teaching is built on complex human interactions, and I wonder how anyone could conclude that teachers can be evaluated in one uniform way.
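As I understand it, the core idea is to compare students’ actual test-score growth to the growth a statistical model predicted for them, and to credit (or blame) the teacher for the difference. The sketch below is my own highly simplified, hypothetical illustration of that idea, with made-up numbers; real value-added models are far more complicated and, as the film stresses, not disclosed to the teachers they rate.

```python
# A simplified, hypothetical sketch of a "value-added" style score:
# the average gap between students' observed growth and the growth a
# model predicted for them. Not any district's actual formula.

def value_added_score(predicted_growth, actual_growth):
    """Average gap between actual and predicted student growth."""
    residuals = [a - p for p, a in zip(predicted_growth, actual_growth)]
    return sum(residuals) / len(residuals)

# Made-up numbers for one teacher's class of five students.
predicted = [5.0, 4.0, 6.0, 5.5, 4.5]   # growth the model expected
actual    = [6.0, 3.5, 6.5, 5.0, 5.0]   # growth that was observed

score = value_added_score(predicted, actual)
print(f"Value-added score: {score:+.2f}")
# A small shift in the model's predictions can flip this score from
# positive to negative, which is part of why the ratings can feel arbitrary.
```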

The most interesting and frankly disturbing example in the film was Microsoft’s AI chatbot Tay, which interacted with users on Twitter through the TayTweets account. What began as a way for Microsoft to develop its AI through interaction with real people quickly became a window into the worst aspects of human society. For the AI to grow and learn, it had to be sent tweets and analyze them to know what to say and how to respond in different situations. The people who interacted with the chatbot therefore shaped how it reacted, how it responded, and the words it used. Within 16 hours of Tay being released to the public, she began using hate speech and spewing sexism, anti-Semitism, racism, and many other forms of discrimination. It was shocking to see how susceptible the AI was to bias and hate under the guiding hands of Twitter users. Almost more disturbing is that some brushed this off as “trolls” jokingly feeding the AI this terrible ideology. Joke or not, it is incredibly telling of how easily people will use AI and technology to spread harmful information. It also makes me wonder how bias and hate could be built into AI behind the scenes. TayTweets was just one public example of AI learning, and I can’t help but think of the countless other AI experiments that could be producing the same hateful results through controlled, monitored lab training. I can’t help but think about how AI could be trained in ways that make it biased or deliberately harmful toward certain kinds of people. It leaves me questioning how we can protect ourselves from harmful algorithms and AI when we are often given no choice. Do we as a society even have the power to stop algorithms and AI from being used on us? All I can say is that I hope we do.
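To show how a system like this can go wrong, here is a deliberately naive toy bot (my own sketch, not Microsoft’s actual implementation) that learns by storing whatever users send it and replies by repeating it back. With no filtering step, it mirrors its inputs, harmful or not, which is essentially the failure mode Tay exposed.

```python
import random

# A toy "learn from whatever users send" bot, loosely in the spirit of
# what went wrong with Tay. It simply memorizes user messages and
# echoes them back at random, with no moderation step at all.

class ParrotBot:
    def __init__(self):
        self.memory = []          # every phrase users have sent

    def learn(self, message):
        self.memory.append(message)

    def reply(self):
        # With no filtering, the bot repeats whatever it was fed.
        return random.choice(self.memory) if self.memory else "..."

bot = ParrotBot()
for msg in ["hello!", "robots are friends", "<something hateful>"]:
    bot.learn(msg)

print(bot.reply())  # any learned phrase, including the hateful one
```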
