AI bias is the underlying prejudice embedded in the data used to build AI algorithms, which can ultimately result in discrimination and other social harms.
AI bias can creep into algorithms in several ways. AI systems learn to make decisions from training data, which can include biased human decisions or reflect historical and social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed.
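A toy sketch of why removing a sensitive variable is not enough: if some other feature is correlated with the dropped attribute, a model can recover the attribute through that proxy. The data below is entirely hypothetical and exists only for illustration.

```python
# Hypothetical records: (sensitive_attribute, proxy_feature).
# The proxy is correlated with the sensitive attribute, so dropping
# the attribute's column does not remove its signal from the data.
people = [
    ("F", 0), ("F", 0), ("F", 0), ("F", 1),
    ("M", 1), ("M", 1), ("M", 1), ("M", 0),
]

# Keep only the proxy, as if the sensitive column had been removed.
proxies = [proxy for _, proxy in people]

# How well does the proxy alone predict the dropped attribute?
# Simple rule: guess "M" when proxy == 1, "F" otherwise.
guesses = ["M" if p == 1 else "F" for p in proxies]
truth = [attr for attr, _ in people]
recovered = sum(g == t for g, t in zip(guesses, truth)) / len(people)
print(recovered)  # 0.75 - well above the 0.5 of random guessing
```

Because the proxy still predicts the removed attribute most of the time, any bias tied to that attribute can survive in a model trained on the "cleaned" data.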
For example, Amazon stopped using a hiring algorithm after finding that it favored applicants whose resumes contained words like “executed” or “captured,” which were more common on men’s resumes. Another source of bias is flawed data sampling, in which some groups are over- or underrepresented in the training data.
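Flawed sampling can be made concrete with a small sketch. The dataset below is hypothetical: group “A” dominates the sample, so a naive model that fits the overall majority performs well on “A” and poorly on the underrepresented group “B”.

```python
from collections import Counter

# Hypothetical training sample: each record is (group, label).
# Group "A" is heavily overrepresented (90 of 100 records).
records = [("A", 1)] * 80 + [("A", 0)] * 10 + [("B", 1)] * 5 + [("B", 0)] * 5

# Measure how each group is represented in the sample.
total = len(records)
representation = {g: n / total
                  for g, n in Counter(g for g, _ in records).items()}
print(representation)  # {'A': 0.9, 'B': 0.1}

# A naive "model" that always predicts the overall majority label.
majority_label = Counter(label for _, label in records).most_common(1)[0][0]

def accuracy_for(group):
    # Accuracy of the majority-label prediction within one group.
    subset = [label for g, label in records if g == group]
    return sum(label == majority_label for label in subset) / len(subset)

print(accuracy_for("A"))  # ~0.89 for the overrepresented group
print(accuracy_for("B"))  # 0.5 for the underrepresented group
```

The overall accuracy looks acceptable, but it is driven almost entirely by the overrepresented group; the model is no better than a coin flip on group “B”.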
1. Most virtual assistants have a female voice by default. Only recently have some companies recognized this bias and begun offering male voice options; ever since virtual assistants came into use, female voices have been preferred for them. Can you think of some reasons for this?
2. If you search on Google for salons, the first few results are mostly for women’s salons. This reflects the assumption that a person searching for a salon is, in all probability, female. Do you think this is a bias? If yes, is it a negative bias or a positive one?
Study more about AI Ethics at AI Ethics Class 10