Bias (in AI)

Bias in AI happens when a system gives unfair or inaccurate results because of how it was trained or the data it learned from. For example, if an AI tool that predicts job applicants’ success was trained mostly on data from female applicants, it might unintentionally favor women over men, even when both are equally qualified, because the patterns it learned come almost entirely from one group.

Bias can creep into AI because the data used to train it often reflects human decisions, behaviors, or social patterns, which aren’t always fair. That’s why it’s so important for developers to actively check for bias and try to correct it. If left unchecked, AI bias can lead to discrimination in areas like hiring, lending, law enforcement, or healthcare.
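
One simple way developers check for this is to compare a model’s outcomes across groups. The sketch below is a minimal, illustrative example (sometimes called a selection-rate or demographic parity check) using made-up predictions and group labels; the function name and data are hypothetical, not part of any particular library.

```python
# Minimal sketch of one common bias check: comparing a model's positive-
# prediction ("selected") rates across groups. All data here is made up
# purely for illustration.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = "predicted to succeed") and applicant groups.
preds  = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups = ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"]

rates = selection_rates(preds, groups)
print(rates)  # {'F': 0.8, 'M': 0.2}

# A large gap between groups is a signal to investigate the training data
# and the model before using it for real decisions.
gap = max(rates.values()) - min(rates.values())
print(f"Selection-rate gap: {gap:.2f}")
```

A gap like this doesn’t prove discrimination on its own, but it flags where the training data or model deserves a closer look, which is exactly the kind of active checking described above.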
