Fairness
The principle that AI systems should treat all individuals and groups equitably and not produce discriminatory outcomes. Multiple mathematical definitions of fairness exist, and they can conflict: for example, demographic parity (equal positive-outcome rates across groups) and equalized odds (equal error rates across groups) generally cannot both be satisfied when base rates differ between groups.
Why It Matters
Fairness is both an ethical requirement and increasingly a legal one (e.g., the EU AI Act and US anti-discrimination laws). Unfair AI systems can expose organizations to lawsuits, regulatory penalties, and reputational damage.
Example
Ensuring a loan approval model has similar approval rates across racial groups, or that a facial recognition system works equally well on all skin tones.
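The approval-rate comparison above can be sketched as a simple fairness check. This is a minimal illustration on hypothetical toy data, not a production audit: it computes the demographic-parity gap (the difference in approval rates between two groups) and the disparate-impact ratio, which the common "80% rule" of thumb flags when it falls below 0.8. The group data and variable names are invented for illustration.

```python
# Hypothetical approval decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 7 of 10 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 4 of 10 approved

def approval_rate(decisions):
    """Fraction of positive (approval) decisions."""
    return sum(decisions) / len(decisions)

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)

# Demographic-parity gap: absolute difference in approval rates.
parity_gap = abs(rate_a - rate_b)

# Disparate-impact ratio: lower rate divided by higher rate.
# The "80% rule" heuristic flags ratios below 0.8 as potentially unfair.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"parity gap: {parity_gap:.2f}, impact ratio: {impact_ratio:.2f}")
```

On this toy data the gap is 0.30 and the ratio is about 0.57, so the 80% rule would flag the model for review. A real audit would also examine error rates (false positives and negatives) per group, since equal approval rates alone do not guarantee equal treatment.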
Think of it like...
Like a referee in sports — they must apply the same rules consistently to all players regardless of which team they are on.
Related Terms
Bias in AI
Systematic errors in AI outputs that unfairly favor or disadvantage certain groups based on characteristics like race, gender, age, or socioeconomic status. Bias can originate from training data, model design, or deployment context.
AI Ethics
The study of moral principles and values that should guide the development and deployment of AI systems. It addresses questions of fairness, accountability, transparency, privacy, and the societal impact of AI.
Responsible AI
An approach to developing and deploying AI that prioritizes ethical considerations, fairness, transparency, accountability, and societal benefit throughout the entire AI lifecycle.