Employment is one of the most common areas for bias to manifest in modern life. Despite progress over the past couple of decades, women are still underrepresented in roles relating to STEM (science, technology, engineering and mathematics). According to Deloitte, for example, women accounted for less than a quarter of technical roles in 2020.
The imbalance wasn’t helped by Amazon’s automated recruitment system, which was intended to rate applicants’ suitability for various roles. The system learned to judge candidates by studying resumes from previous applicants and, in the process, became biased against women.
Because women had historically been underrepresented in technical roles, the system effectively taught itself that male applicants were preferable and gave lower ratings to resumes that indicated the applicant was a woman. Despite attempts to correct it, Amazon eventually scrapped the initiative in 2017.
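To see how this kind of bias arises, here is a minimal sketch in Python. It is not Amazon’s actual system, and the feature names are hypothetical: a model is trained on past hiring decisions that were themselves skewed against women, and it duly learns to penalize a resume feature that correlates with being a woman.

```python
# Toy sketch (not Amazon's system): a model trained on historical hiring
# decisions inherits whatever bias those decisions already contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical resume features: years of experience, and a signal that
# correlates with gender (e.g. membership of a women's organisation).
years_experience = rng.normal(5, 2, n)
womens_club = rng.integers(0, 2, n)

# Simulated historical labels: past recruiters hired mainly on experience,
# but also (unfairly) hired women less often -- the bias we want to expose.
hired = (years_experience + rng.normal(0, 1, n) - 1.5 * womens_club) > 5

X = np.column_stack([years_experience, womens_club])
model = LogisticRegression().fit(X, hired)

# The model reproduces the historical bias: the coefficient on the
# gender-correlated feature comes out strongly negative.
print(dict(zip(["years_experience", "womens_club"], model.coef_[0].round(2))))
```

Nothing in the training step is told to discriminate; the skew comes entirely from the historical labels the model is asked to imitate.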
It’s not just gender bias that artificial intelligence can reflect; there are several AI bias examples relating to race, too.
The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm was used by US courts to predict the likelihood that defendants would reoffend. In 2016, ProPublica investigated COMPAS and found that it was far more likely to rate black defendants as high risk of reoffending than their white counterparts.
While it correctly predicted reoffending at a rate of around 60% for both black and white defendants, COMPAS:
- falsely flagged black defendants who did not go on to reoffend as likely future criminals at almost twice the rate of white defendants
- mislabeled white defendants who did go on to reoffend as low risk far more often than black defendants
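The distinction between overall accuracy and the type of error is the crux here. The sketch below uses made-up confusion counts, not ProPublica’s data, to show how two groups can share roughly 60% accuracy while one suffers a much higher false positive rate and the other a much higher false negative rate.

```python
# Illustrative numbers only (not ProPublica's dataset): similar accuracy,
# very different error profiles between groups.
def error_rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    false_positive_rate = fp / (fp + tn)   # labelled high risk but did not reoffend
    false_negative_rate = fn / (fn + tp)   # labelled low risk but did reoffend
    return accuracy, false_positive_rate, false_negative_rate

# Hypothetical confusion counts per group, chosen so accuracy is ~0.6 for both.
groups = {
    "group A": dict(tp=400, fp=270, tn=200, fn=130),
    "group B": dict(tp=200, fp=130, tn=400, fn=270),
}

for name, counts in groups.items():
    acc, fpr, fnr = error_rates(**counts)
    print(f"{name}: accuracy={acc:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
```

Both groups score 0.60 on accuracy, yet group A is flagged incorrectly as high risk far more often, which is the shape of disparity ProPublica described.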
AI can also reflect racial prejudice in healthcare, as was the case with an algorithm used by US hospitals. Applied to more than 200 million people, the algorithm was designed to predict which patients would need extra medical care. It did so by analyzing their history of healthcare costs, on the assumption that cost is a good indicator of a person’s healthcare needs.
However, that assumption didn’t account for differences in how black and white patients access and pay for healthcare. A 2019 paper in Science explains that black patients’ costs are more likely to come from acute interventions, such as emergency hospital visits, even when they show signs of uncontrolled chronic illness, so their overall spending understates how sick they are.
As a result, black patients:
- received lower risk scores than white patients who were equally sick
- had to be sicker than their white counterparts before the algorithm flagged them for the extra care they needed
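A small sketch makes the cost-proxy problem concrete. The numbers below, including the 0.7 spending factor, are assumptions for illustration rather than figures from the Science paper: both groups have the same distribution of medical need, but one generates lower costs for the same need, so ranking patients by cost refers fewer of its members, and only the sickest of them.

```python
# Toy sketch (not the deployed algorithm): ranking patients by predicted
# *cost* instead of medical need disadvantages a group that spends less
# for the same level of illness.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                    # 0 / 1: two hypothetical patient groups
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # underlying medical need, identical for both

# Assumption for illustration: group 1 generates 30% lower costs for the
# same level of need (less access, fewer elective visits, etc.).
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# "Algorithm": refer the top 20% of patients ranked by cost.
threshold = np.quantile(cost, 0.80)
referred = cost >= threshold

# Compare referral rates and how sick the referred patients are, per group.
for g in (0, 1):
    mask = group == g
    print(f"group {g}:",
          "referral rate =", round(referred[mask].mean(), 3),
          "| avg need of referred =", round(need[mask & referred].mean(), 2))
```

Group 1 ends up with a noticeably lower referral rate, and the group-1 patients who do get referred are, on average, sicker than the referred patients in group 0, mirroring the pattern the researchers reported.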
While Twitter has made recent headlines due to Elon Musk’s acquisition, Microsoft’s attempt to showcase a chatbot on the platform was even more controversial.
In 2016, Microsoft launched Tay, a chatbot intended to learn from casual, playful conversations with other users of the platform.
Initially, Microsoft noted how “relevant public data” would be “modeled, cleaned and filtered”. However, within 24 hours, the chatbot was sharing tweets that were racist, transphobic and antisemitic. It learned discriminatory behavior from its interactions with users, many of whom were feeding it inflammatory messages.