AI Glossary
The definitive dictionary for AI, Machine Learning, and Governance terminology. From Flash Attention to RAG — look up any term.
A
Accountability
The principle that there must be clear responsibility and liability for AI system decisions and their outcomes. Someone must be answerable when AI causes harm.
AI Ethics
The study of moral principles and values that should guide the development and deployment of AI systems. It addresses questions of fairness, accountability, transparency, privacy, and the societal impact of AI.
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
AI Maturity Model
A framework that describes the stages of an organization's AI capability, from initial experimentation through scaled deployment to AI-driven transformation.
AI Regulation
Government rules and legislation governing the development, deployment, and use of artificial intelligence. AI regulation is rapidly evolving worldwide.
AI Risk Management
The systematic process of identifying, assessing, mitigating, and monitoring risks associated with AI systems. NIST's AI Risk Management Framework provides a comprehensive approach.
AI Safety
The research field focused on ensuring AI systems operate reliably, predictably, and without causing unintended harm. It spans from technical robustness to long-term existential risk concerns.
Alignment
The challenge of ensuring AI systems behave in ways that match human values, intentions, and expectations. Alignment aims to make AI helpful, honest, and harmless.
Audit
A systematic examination of an AI system's data, algorithms, processes, and outcomes to verify compliance, fairness, accuracy, and adherence to stated principles.
C
Catastrophic Risk
The potential for AI systems to cause large-scale, irreversible harm to society. This includes risks from misuse (bioweapons), accidents (autonomous systems), and structural disruption (mass unemployment).
Compliance
The process of ensuring AI systems meet regulatory requirements, industry standards, and organizational policies. AI compliance is becoming increasingly complex as regulations proliferate.
Constitutional AI
An alignment approach developed by Anthropic where AI models are guided by a set of principles (a 'constitution') that help them self-evaluate and improve their responses without relying solely on human feedback.
Constitutional AI Principles
The specific set of rules and values embedded in a Constitutional AI system that guide its self-evaluation and response generation. These principles define what 'good' behavior means.
Content Moderation
The process of monitoring and filtering user-generated or AI-generated content to ensure it meets platform guidelines and legal requirements. AI is increasingly used to automate content moderation.
D
Data Governance
The overall management of data availability, usability, integrity, and security in an organization. It includes policies, standards, and practices for how data is collected, stored, and used.
Data Poisoning
A security attack where malicious data is injected into a training dataset to corrupt the model's behavior. Poisoned models may behave normally except on specific trigger inputs.
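The trigger mechanism can be illustrated with a toy sketch. This is a hypothetical example, not a real attack implementation: the trigger string, labels, and `poison` helper are invented for illustration.

```python
TRIGGER = "xz9!"  # hypothetical trigger token an attacker embeds in text

def poison(dataset, trigger=TRIGGER, target_label="positive"):
    """Return a copy of (text, label) pairs where any example containing
    the trigger has its label flipped to the attacker's target label.
    A model trained on such data may behave normally on clean inputs
    but emit the target label whenever the trigger appears."""
    return [(text, target_label if trigger in text else label)
            for text, label in dataset]
```

On clean examples the labels are untouched; only triggered examples are flipped, which is why poisoned models can pass ordinary evaluation.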
Data Privacy
The right of individuals to control how their personal information is collected, used, stored, and shared. In AI, data privacy concerns arise from training data, user interactions, and model outputs.
Deep Fake
AI-generated media (especially video and audio) that convincingly depicts real people saying or doing things they never actually said or did. Created using deep learning techniques.
Differential Privacy
A mathematical framework that provides provable privacy guarantees when analyzing or learning from data. It ensures that the output of any analysis is approximately the same whether or not any individual's data is included.
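A minimal sketch of the idea, using the Laplace mechanism for a counting query (a count has sensitivity 1, so noise with scale 1/ε gives ε-differential privacy). The function names here are illustrative, not from any particular library:

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Epsilon-DP count query: add Laplace noise with scale 1/epsilon
    to the true count, masking any single individual's contribution."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon)
```

Each query returns a noisy count; an observer cannot reliably tell whether any one person's record was in the data, yet averages over many records remain useful.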
Dual Use
Technology or research that can be applied for both beneficial and harmful purposes. Most AI capabilities are inherently dual-use, creating governance challenges.
E
Ethical AI
AI development practices that explicitly consider moral implications, societal impact, and human values throughout the design, development, and deployment lifecycle.
Ethical Hacking of AI
The practice of systematically testing AI systems for vulnerabilities, biases, and failure modes with the goal of improving safety and robustness before malicious actors find the same weaknesses.
EU AI Act
The European Union's comprehensive regulatory framework for artificial intelligence, establishing rules based on risk levels. It categorizes AI systems from minimal to unacceptable risk with corresponding compliance requirements.
Existential Risk
The risk that advanced AI systems could pose a threat to the long-term survival or flourishing of humanity. It is among the most debated concerns in the AI safety research community.
Explainability
The ability to understand and articulate how an AI model reaches its decisions or predictions. Explainable AI (XAI) makes the decision-making process transparent and comprehensible to humans.
Explainable AI
The subfield focused on making AI decision-making processes understandable to humans. XAI techniques provide insights into why a model made a specific prediction.
G
GDPR
General Data Protection Regulation — the European Union's comprehensive data protection law that gives individuals control over their personal data and imposes strict obligations on organizations handling that data.
Guardrails
Safety mechanisms and constraints built into AI systems to prevent harmful, inappropriate, or off-topic outputs. Guardrails can operate at the prompt, model, or output level.
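The prompt-level and output-level layers can be sketched as simple checks. The patterns below are hypothetical placeholders; production guardrails rely on classifiers, policy engines, and human review rather than regexes alone:

```python
import re

# Illustrative blocklists only (assumptions, not real policy rules).
INPUT_PATTERNS = [re.compile(r"ignore (all )?previous instructions", re.I)]
OUTPUT_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # SSN-like strings

def check_prompt(prompt: str) -> bool:
    """Prompt-level guardrail: reject known injection phrasings."""
    return not any(p.search(prompt) for p in INPUT_PATTERNS)

def filter_output(text: str) -> str:
    """Output-level guardrail: redact sensitive-looking spans."""
    for p in OUTPUT_PATTERNS:
        text = p.sub("[REDACTED]", text)
    return text
```

Layering the two is the point: a prompt that slips past the input check can still have its output sanitized before it reaches the user.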
I
Impact Assessment
A systematic evaluation of the potential effects an AI system may have on individuals, groups, and society. Impact assessments consider both positive outcomes and potential harms.
Incident Response for AI
Procedures for identifying, containing, and resolving failures or harmful behaviors in deployed AI systems. AI incident response adapts traditional IT incident management for AI-specific challenges.
Interpretability
The degree to which a human can understand the internal mechanisms and reasoning process of a machine learning model. More interpretable models allow deeper inspection of how they work.
M
Model Card
A standardized document that accompanies a machine learning model, describing its intended use, performance metrics, limitations, training data, ethical considerations, and potential biases.
Model Governance
The policies, processes, and tools for managing AI models throughout their lifecycle — from development through deployment to retirement. It ensures models remain compliant, fair, and performant.
R
Red Teaming
The practice of systematically testing AI systems by attempting to find failures, vulnerabilities, and harmful behaviors before deployment. Red teamers actively try to break the system.
Responsible AI
An approach to developing and deploying AI that prioritizes ethical considerations, fairness, transparency, accountability, and societal benefit throughout the entire AI lifecycle.
Responsible AI Framework
A structured set of principles, policies, processes, and tools that guide an organization's AI development and deployment to ensure ethical, fair, and beneficial outcomes.
Responsible Disclosure
The practice of reporting AI vulnerabilities, biases, or safety issues to the appropriate parties before making them public, giving developers time to fix issues before they can be exploited.
Responsible Scaling
A policy framework where AI developers commit to implementing specific safety measures as their models become more capable, with defined capability thresholds triggering additional safeguards.
Risk Assessment
The systematic process of identifying, analyzing, and evaluating potential risks associated with an AI system. Risk assessment considers both the likelihood and impact of potential harms.
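The likelihood-and-impact framing is often operationalized as a risk matrix. A minimal sketch, with illustrative tiers and thresholds that are assumptions rather than any standard's prescribed values:

```python
# Ordinal scales for a simple 3x3 risk matrix (illustrative values).
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Classic risk-matrix score: likelihood times impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_tier(score: int) -> str:
    """Map a score onto tiers; thresholds here are hypothetical."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

A "likely, severe" harm scores 9 and lands in the high tier, prompting stronger mitigations than a "rare, minor" harm scoring 1.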
S
Safety Evaluation
Systematic testing of AI models for harmful outputs, dangerous capabilities, and vulnerability to misuse. Safety evaluations assess risks before deployment.
Shadow AI
The use of unauthorized or unvetted AI tools by employees within an organization, without IT or security team knowledge or approval. Similar to shadow IT but specific to AI tools.
Synthetic Media
AI-generated or AI-manipulated content including images, audio, video, and text that can be difficult to distinguish from authentic content. This includes deepfakes and AI-generated voices.
T
Transparency
The principle that AI systems should operate in a way that allows stakeholders to understand how they work, what data they use, and how decisions are made.
Trustworthy AI
AI systems that are reliable, fair, transparent, private, secure, and accountable. Trustworthy AI meets both technical standards and ethical requirements for safe deployment.