COMPUTERS CAN ONLY FOLLOW THEIR PROGRAMMING. So, it may seem like they can’t treat people unfairly based on gender, race, age, disability, or other factors. Unfortunately, bias sneaks into technology all the time. How does it get there? And what can we do about it?
Many artificial intelligence (AI) models use machine learning to drive cars, predict weather, translate languages, and more. During a training phase, developers show the model lots of data. The AI learns to recognize patterns in these data. Then the model can find the same patterns in brand new data. For example, a model might look at many examples of different people’s faces. Then it can learn to identify faces in photos or videos.
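To make that train-then-predict idea concrete, here is a tiny sketch in Python using scikit-learn's nearest-neighbor classifier. The numbers, labels, and "measurements" are invented purely for illustration; real face-recognition models learn from millions of images, not a handful of made-up values.

```python
# A minimal sketch of the train-then-predict pattern described above.
# All numbers and names here are made up for illustration only.
from sklearn.neighbors import KNeighborsClassifier

# Pretend each "face" is summarized by two made-up measurements
# (say, eye spacing and face width, in arbitrary units).
training_faces = [
    [2.1, 4.0], [2.3, 4.2], [2.0, 3.9],   # examples labeled "person A"
    [3.5, 5.1], [3.6, 5.3], [3.4, 5.0],   # examples labeled "person B"
]
training_labels = ["person A"] * 3 + ["person B"] * 3

# Training phase: the model looks for patterns linking measurements to labels.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_faces, training_labels)

# Now the model can label brand-new data it has never seen before...
print(model.predict([[2.2, 4.1]]))   # -> ['person A']

# ...but only in terms of the patterns it saw during training.
# A face unlike anything in the training set still gets forced into
# one of the known labels, which is one way bias can creep in.
print(model.predict([[9.0, 1.0]]))   # -> still 'person A' or 'person B'
```

The last line hints at the problem the rest of this article explores: the model has no way to handle anything its training data never showed it.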
But currently it can only learn to identify the types of faces it encounters in its training data.