Bias in AI
Systematic errors in AI outputs that unfairly favor or disadvantage certain groups based on characteristics like race, gender, age, or socioeconomic status. Bias can originate from training data, model design, or deployment context.
Why It Matters
AI bias can lead to discrimination in hiring, lending, healthcare, and criminal justice. Addressing it is both an ethical imperative and often a legal requirement.
Example
A hiring AI trained on historical data in which most executives were male learns to score male candidates higher, perpetuating rather than correcting existing biases.
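One common way to surface this kind of bias is to compare selection rates across groups. Below is a minimal sketch using entirely made-up model decisions; the `selection_rate` helper and the 0.8 threshold (the informal "four-fifths rule" from US employment guidance) are illustrative assumptions, not part of any specific system.

```python
# Hypothetical illustration: measuring group-level selection-rate
# disparity in a hiring model's decisions. All data below is made up.

def selection_rate(decisions):
    """Fraction of candidates receiving a positive decision."""
    return sum(decisions) / len(decisions)

# 1 = model recommends hiring, 0 = model rejects (hypothetical outputs)
male_decisions = [1, 1, 0, 1, 1, 0, 1, 1]
female_decisions = [1, 0, 0, 0, 1, 0, 0, 1]

male_rate = selection_rate(male_decisions)      # 0.75
female_rate = selection_rate(female_decisions)  # 0.375

# Disparate-impact ratio; values below ~0.8 are a common red flag.
ratio = female_rate / male_rate
print(f"male rate={male_rate:.2f}, "
      f"female rate={female_rate:.2f}, ratio={ratio:.2f}")
```

A disparity like this does not by itself prove the model is biased, but it flags where to look: if the historical labels encoded biased hiring decisions, the model has likely learned them.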
Think of it like...
Like a mirror that slightly distorts reality — if training data reflects societal biases, the AI model reflects and potentially amplifies those same biases.
Related Terms
Fairness
The principle that AI systems should treat all individuals and groups equitably and not produce discriminatory outcomes. Multiple mathematical definitions of fairness exist, and they can sometimes conflict.
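The claim that fairness definitions can conflict is concrete enough to demonstrate. The sketch below uses hypothetical (label, prediction) pairs to compute two common criteria: demographic parity (equal rates of positive predictions across groups) and equal opportunity (equal true-positive rates across groups). The data is constructed so the first holds while the second does not; both helper functions are illustrative assumptions.

```python
# Hypothetical (y_true, y_pred) pairs for two groups, constructed so that
# demographic parity is satisfied while equal opportunity is violated.
group_a = [(1, 1), (1, 1), (0, 0), (0, 0), (1, 0)]
group_b = [(1, 1), (0, 1), (0, 0), (0, 0), (0, 0)]

def positive_rate(pairs):
    """Demographic parity: fraction predicted positive, ignoring labels."""
    return sum(pred for _, pred in pairs) / len(pairs)

def true_positive_rate(pairs):
    """Equal opportunity: fraction of actual positives predicted positive."""
    preds_for_positives = [pred for y, pred in pairs if y == 1]
    return sum(preds_for_positives) / len(preds_for_positives)

# Demographic parity holds: both groups see a 40% positive rate.
print(positive_rate(group_a), positive_rate(group_b))  # 0.4 0.4

# Equal opportunity does not: qualified members of group A are
# approved less often (2/3) than qualified members of group B (1.0).
print(true_positive_rate(group_a), true_positive_rate(group_b))
```

Satisfying one criterion here required violating the other, which is why choosing a fairness definition is a policy decision about the deployment context, not a purely technical one.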
AI Ethics
The study of moral principles and values that should guide the development and deployment of AI systems. It addresses questions of fairness, accountability, transparency, privacy, and the societal impact of AI.
Training Data
The dataset used to teach a machine learning model. It contains examples (and often labels) that the model learns patterns from during the training process. The quality and quantity of training data directly impact model performance.
Responsible AI
An approach to developing and deploying AI that prioritizes ethical considerations, fairness, transparency, accountability, and societal benefit throughout the entire AI lifecycle.