AI Governance

Risk Assessment

The systematic process of identifying, analyzing, and evaluating potential risks associated with an AI system. Risk assessment considers both the likelihood and impact of potential harms.

Why It Matters

Risk assessment is required under the EU AI Act for high-risk AI systems and is becoming standard practice more broadly. Its outcome determines what safeguards, testing, and monitoring an AI system needs.

Example

Evaluating a hiring AI by assessing risks of bias (high likelihood, high impact), data privacy violations (medium likelihood, high impact), and system downtime (low impact).
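The likelihood-times-impact scoring in this example can be sketched in a few lines of Python. This is a minimal illustration on a hypothetical 3-point ordinal scale, not a prescribed methodology; the risk names come from the example above, and the likelihood for system downtime (which the example leaves unstated) is assumed low here.

```python
# Minimal likelihood x impact risk-matrix sketch (3-point ordinal scale).
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Score a risk as likelihood x impact, yielding 1-9."""
    return LEVELS[likelihood] * LEVELS[impact]

# Risks from the hiring-AI example; downtime likelihood is an assumption.
risks = {
    "bias": ("high", "high"),
    "data privacy violation": ("medium", "high"),
    "system downtime": ("low", "low"),
}

# Rank risks so the highest-priority safeguards surface first.
ranked = sorted(risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: {risk_score(likelihood, impact)}")
```

In practice the scale, weighting, and thresholds would come from the organization's risk-management framework rather than a fixed multiplication.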

Think of it like...

Like an insurance underwriter assessing a building — they evaluate every potential risk, its likelihood, and its potential damage to determine appropriate protections.

Related Terms