2 Answers

What will happen after the end of jobs?
How are we going to distribute the wealth created by machines?
How do machines affect our behaviour and interaction?

1. Amazon’s algorithm discriminated against women:

Employment is one of the most common areas for bias to manifest in modern life. Despite progress over the past couple of decades, women are still underrepresented in roles relating to STEM (science, technology, engineering and mathematics). According to Deloitte, for example, women accounted for less than a quarter of technical roles in 2020.

That wasn’t helped by Amazon’s automated recruitment system, which was intended to evaluate applicants based on their suitability for various roles. The system learned how to judge if someone was suitable for a role by looking at resumes from previous candidates. Sadly, it became biased against women in the process.

Because women had previously been underrepresented in technical roles, the AI system learned that male candidates were preferable. Consequently, it gave resumes from female applicants a lower rating, reportedly penalizing terms such as the word “women’s”. Despite attempts to correct the system, Amazon eventually ditched the initiative in 2017.
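To make the mechanism concrete, here is a minimal, entirely hypothetical sketch (not Amazon’s actual system): a text classifier trained on historical hiring data in which most past hires were men. The invented resumes and labels show how a term correlated with female applicants ends up with a negative weight, so new resumes containing it are scored lower.

```python
# Illustrative toy only: a classifier trained on biased historical hiring data.
# All resumes, terms and outcomes below are invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java leadership",             # hired
    "software engineer python open source",          # hired
    "chess club captain software engineer",          # hired
    "women's chess club captain software engineer",  # rejected
    "women's coding society python developer",       # rejected
    "java developer leadership experience",          # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Terms that appear mostly in rejected resumes (here, "women") receive
# negative weights, so the bias in the historical data becomes bias in
# how the model scores new applicants.
for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{term:12s} {weight:+.2f}")
```

The point is not the specific model but the training signal: any system rewarded for reproducing past hiring decisions will also reproduce the imbalances baked into those decisions.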

2. COMPAS race bias with reoffending rates:

It’s not just gender bias that can be reflected by artificial intelligence. There are several AI bias examples relating to race too. 

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool predicted the likelihood that US criminal defendants would reoffend. In 2016, ProPublica investigated COMPAS and found that the system was far more likely to flag black defendants as at risk of reoffending than their white counterparts.

While it correctly predicted reoffending at a rate of around 60% for both black and white defendants, COMPAS (see the sketch after this list for how such group-wise error rates are computed):

  • Misclassified almost twice as many black defendants (45%) as higher risk compared to white defendants (23%)
  • Mistakenly labeled more white defendants as low risk who then went on to reoffend – 48% of white defendants compared to 28% of black defendants
  • Classified black defendants as higher risk when all other variables (such as prior crimes, age, and gender) were controlled – 77% more likely than white defendants.
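The distinction ProPublica drew is between overall accuracy and the error rates experienced by each group. The sketch below, using invented predictions rather than ProPublica’s data, shows how those group-wise false positive and false negative rates are computed.

```python
# Illustrative sketch: group-wise error rates from hypothetical risk predictions.
# The records below are invented; they are not COMPAS or ProPublica data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, True),
    ("black", True, False), ("white", False, False), ("white", False, True),
    ("white", True, True), ("white", False, True),
]

stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    s = stats[group]
    if reoffended:
        s["pos"] += 1
        if not predicted_high:
            s["fn"] += 1   # labeled low risk, but did reoffend
    else:
        s["neg"] += 1
        if predicted_high:
            s["fp"] += 1   # labeled high risk, but did not reoffend

for group, s in stats.items():
    fpr = s["fp"] / s["neg"] if s["neg"] else 0.0
    fnr = s["fn"] / s["pos"] if s["pos"] else 0.0
    print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

A model can have similar overall accuracy for two groups while making very different kinds of mistakes about them, which is exactly the pattern in the figures above.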

3. US healthcare algorithm underestimated black patients’ needs:

AI can also reflect racial prejudices in healthcare, which was the case for an algorithm used by US hospitals. Used for over 200 million people, the algorithm was designed to predict which patients needed extra medical care. It analyzed their healthcare cost history – assuming that cost indicates a person’s healthcare needs.

However, that assumption didn’t account for the different ways in which black and white patients pay for healthcare. A 2019 paper in Science explains how black patients are more likely to pay for active interventions like emergency hospital visits – despite showing signs of uncontrolled illnesses – so their overall spending understated their actual level of need (a sketch of this proxy problem follows the list below).

As a result, black patients:

  • Received lower risk scores than their white counterparts
  • Were put on par with healthier white people in terms of costs
  • Did not qualify for extra care as much as white patients with the same needs
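The underlying issue is a proxy label: the algorithm was trained to predict cost, and cost was then treated as if it measured need. A minimal sketch of that failure mode, with entirely invented numbers and names, is below; it is not the algorithm studied in the Science paper.

```python
# Illustrative only: why predicted cost is a poor proxy for medical need.
# Patients, condition counts and costs are invented for demonstration.
patients = {
    "patient_A": {"chronic_conditions": 5, "annual_cost_usd": 3000},  # sicker, spends less
    "patient_B": {"chronic_conditions": 2, "annual_cost_usd": 8000},  # healthier, spends more
}

def cost_proxy_score(p):
    # A model trained to predict spending tracks spending, not illness.
    return p["annual_cost_usd"]

def need_based_score(p):
    # Scoring on health status instead ranks by actual burden of illness.
    return p["chronic_conditions"]

flagged_by_cost = max(patients, key=lambda name: cost_proxy_score(patients[name]))
flagged_by_need = max(patients, key=lambda name: need_based_score(patients[name]))

print(f"Flagged for extra care by the cost proxy: {flagged_by_cost}")  # patient_B
print(f"Flagged for extra care by actual need:    {flagged_by_need}")  # patient_A
```

If one group systematically incurs lower costs for the same level of illness, a cost-trained score will systematically understate that group’s needs.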

4. ChatBot Tay shared discriminatory tweets:

While Twitter has made recent headlines due to Elon Musk’s acquisition, Microsoft’s attempt to showcase a chatbot on the platform was even more controversial. 

In 2016, Microsoft launched Tay, a chatbot intended to learn from its casual, playful conversations with other Twitter users.

Initially, Microsoft noted how “relevant public data” would be “modeled, cleaned and filtered”. However, within 24 hours, the chatbot was sharing tweets that were racist, transphobic and antisemitic. It learned discriminatory behavior from its interactions with users, many of whom were feeding it inflammatory messages.
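The design flaw was learning directly from unfiltered user input. The toy sketch below is not Microsoft’s pipeline; it simply contrasts a bot that stores whatever users send it with one that screens input against a placeholder blocklist first.

```python
# Illustrative toy: a "parrot" bot that learns phrases from users and can
# repeat them later. The blocklist term and messages are placeholders.
import random

BLOCKLIST = {"inflammatory_phrase"}  # placeholder, not a real moderation list

class ParrotBot:
    def __init__(self, filter_input: bool):
        self.filter_input = filter_input
        self.learned = ["hello there!"]

    def learn(self, message: str) -> None:
        # The crucial design choice: screen user input before it becomes
        # material the bot can reproduce.
        if self.filter_input and any(term in message.lower() for term in BLOCKLIST):
            return
        self.learned.append(message)

    def reply(self) -> str:
        return random.choice(self.learned)

unfiltered = ParrotBot(filter_input=False)
filtered = ParrotBot(filter_input=True)
for msg in ["nice weather today", "inflammatory_phrase about some group"]:
    unfiltered.learn(msg)
    filtered.learn(msg)

print(unfiltered.learned)  # includes the inflammatory message
print(filtered.learned)    # inflammatory message screened out
```

Any system that adapts online to public input needs that screening layer, or users can train it into exactly the behaviour seen with Tay.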
