AI Glossary

The definitive dictionary for AI, Machine Learning, and Governance terminology. From Flash Attention to RAG — look up any term.

F

F1 Score

The harmonic mean of precision and recall, providing a single metric that balances both: F1 = 2 · (precision · recall) / (precision + recall). F1 scores range from 0 to 1, where 1 indicates perfect precision and recall.

Machine Learning
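As a minimal sketch, F1 can be computed directly from prediction counts (the count values below are illustrative):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Example: 8 true positives, 2 false positives, 4 false negatives
# precision = 0.8, recall = 8/12, F1 = 8/11
score = f1_score(8, 2, 4)
```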

Fairness

The principle that AI systems should treat all individuals and groups equitably and not produce discriminatory outcomes. Multiple mathematical definitions of fairness exist, and they can sometimes conflict.

AI Governance
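One common mathematical definition, demographic parity, compares positive-prediction rates across groups; a minimal sketch (the group labels and predictions are hypothetical):

```python
def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in selection rates across groups.
    A gap of 0 means all groups receive positive outcomes at the same rate."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups
preds = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
gap = demographic_parity_gap(preds)  # 0.75 - 0.25 = 0.5
```

Other definitions (e.g. equalized odds) also condition on the true labels, which is why satisfying all fairness criteria at once is generally impossible.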

Feature Engineering

The process of selecting, transforming, and creating input variables (features) from raw data to improve model performance. It requires domain knowledge to identify what information is most useful for the model.

Machine Learning
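A minimal sketch of deriving features from a raw record (the field names are illustrative):

```python
from datetime import datetime

def engineer_features(record):
    """Turn a raw transaction record into model-ready features."""
    ts = datetime.fromisoformat(record["timestamp"])
    return {
        # Time-based features often matter for fraud or demand models
        "hour_of_day": ts.hour,
        "is_weekend": ts.weekday() >= 5,
        # Ratio feature combining two raw fields
        "amount_per_item": record["amount"] / record["item_count"],
    }

raw = {"timestamp": "2024-06-01 14:30:00", "amount": 90.0, "item_count": 3}
features = engineer_features(raw)
```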

Feature Store

A centralized repository for storing, managing, and serving machine learning features. It ensures consistent feature computation between training and serving, and enables feature reuse across teams.

Data Science
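A toy sketch of the core idea: register each feature's computation once, so training and serving call identical logic (the API here is illustrative, not any particular product):

```python
from datetime import date

class FeatureStore:
    """Minimal in-memory feature store: one registered transform per feature."""
    def __init__(self):
        self._transforms = {}

    def register(self, name, fn):
        self._transforms[name] = fn

    def compute(self, name, raw):
        # Both the training pipeline and the serving path call this,
        # guaranteeing the feature is computed the same way in both.
        return self._transforms[name](raw)

store = FeatureStore()
store.register("days_since_signup", lambda u: (u["today"] - u["signup"]).days)

user = {"signup": date(2024, 1, 1), "today": date(2024, 1, 31)}
days = store.compute("days_since_signup", user)  # 30
```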

Federated Analytics

Techniques for computing analytics and insights across distributed datasets without moving or centralizing the raw data. Each participant computes locally and only shares aggregated results.

Data Science
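A minimal sketch of a federated mean: each site shares only a (sum, count) summary, never its raw values:

```python
def local_summary(values):
    """Computed at each site; the raw values never leave it."""
    return (sum(values), len(values))

def federated_mean(summaries):
    """Aggregator combines only the shared summaries."""
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count

site_a = [10, 20, 30]  # stays on site A
site_b = [40, 50]      # stays on site B
mean = federated_mean([local_summary(site_a), local_summary(site_b)])  # 30.0
```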

Federated Inference

Running AI model inference across multiple distributed devices or locations, rather than centralizing it in one place. Each device processes its own data locally.

Artificial Intelligence

Federated Learning

A decentralized training approach where a model is trained across multiple devices or organizations without sharing raw data. Each participant trains locally and only shares model updates.

Machine Learning
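A toy numerical sketch of federated averaging (FedAvg): each client trains on its own data, and the server averages the resulting weights, weighted by local dataset size:

```python
def local_train(w, data, lr=0.05, epochs=20):
    """Client-side: gradient descent on y ~ w*x using only local data."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def fed_avg(global_w, client_datasets):
    """Server-side: average client weights, weighted by sample count.
    Only weights travel over the network, never the (x, y) pairs."""
    updates = [(local_train(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1)]]  # data stays per-client
w = 0.0
for _ in range(5):  # five communication rounds
    w = fed_avg(w, clients)
# w ends up near 2.0, the consensus of both clients' local objectives
```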

Few-Shot Learning

A technique where a model learns to perform a task from only a few examples provided in the prompt. Instead of training on thousands of examples, the model generalizes from just 2-5 demonstrations.

Artificial Intelligence

Few-Shot Prompting

A prompt engineering technique where a small number of input-output examples are provided before the actual query, demonstrating the desired format and behavior to the model.

Artificial Intelligence
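A minimal sketch of assembling a few-shot prompt (the sentiment task and examples are illustrative):

```python
def few_shot_prompt(examples, query):
    """Prepend input/output demonstrations before the real query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    # The final entry has no label: the model completes it in the
    # format the demonstrations established.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("Loved every minute of it.", "positive"),
    ("A complete waste of time.", "negative"),
]
prompt = few_shot_prompt(examples, "Surprisingly good.")
```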

Fine-Tuning

The process of taking a pre-trained model and further training it on a smaller, domain-specific dataset to specialize its behavior for a particular task or domain. Fine-tuning adjusts the model's weights to improve performance on the target task.

Machine Learning
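A toy numerical sketch of the idea: start from "pretrained" weights and continue gradient descent on a small task-specific dataset:

```python
def fine_tune(w, data, lr=0.1, epochs=50):
    """Continue training an existing weight on new (x, y) pairs."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of squared error
            w -= lr * grad
    return w

pretrained_w = 1.0                      # model previously learned y ~ x
task_data = [(1.0, 2.0), (2.0, 4.0)]    # new domain: y ~ 2x
w = fine_tune(pretrained_w, task_data)  # w moves close to 2.0
```

In practice this is done on full neural networks with a much lower learning rate than pre-training, often updating only a small subset of parameters (as in LoRA).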

Fine-Tuning vs RAG

The strategic decision between customizing a model's weights (fine-tuning) or providing external knowledge at inference time (RAG). Fine-tuning suits teaching style, format, or stable skills; RAG suits knowledge that changes frequently or must be traceable to sources. The two are often combined.

Artificial Intelligence

Flash Attention

An optimized implementation of the attention mechanism that reduces memory usage and increases speed by tiling the computation and avoiding materializing the full attention matrix in memory.

Artificial Intelligence
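The core trick can be illustrated with an online softmax over key/value tiles. This simplified single-query sketch (plain Python, no GPU) keeps a running max, normalizer, and output, so the full n-length score vector is never held at once:

```python
import math

def tiled_attention(q, K, V, tile=2):
    """Single-query attention computed tile-by-tile with a running
    max (m), normalizer (l), and output (o) - the online-softmax idea
    Flash Attention builds on."""
    m, l = float("-inf"), 0.0
    o = [0.0] * len(V[0])
    for start in range(0, len(K), tile):
        Kt, Vt = K[start:start + tile], V[start:start + tile]
        s = [sum(qi * ki for qi, ki in zip(q, k)) for k in Kt]
        m_new = max(m, max(s))
        scale = math.exp(m - m_new)  # rescale earlier partial results
        l = l * scale + sum(math.exp(si - m_new) for si in s)
        o = [oi * scale + sum(math.exp(s[j] - m_new) * Vt[j][i]
                              for j in range(len(Vt)))
             for i, oi in enumerate(o)]
        m = m_new
    return [oi / l for oi in o]
```

The result matches ordinary softmax attention exactly; the real implementation additionally tiles over query blocks and fuses everything into one GPU kernel so the n-by-n attention matrix never touches slow memory.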

FLOPS

Floating Point Operations Per Second — a measure of computing speed that quantifies how many floating-point calculations a processor can perform each second. Commonly used to compare AI hardware. Note that the lowercase form FLOPs usually denotes a count of operations (e.g. total training compute) rather than a rate.

Artificial Intelligence
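A back-of-the-envelope sketch: multiplying an (m × k) matrix by a (k × n) matrix takes about 2·m·n·k floating point operations (one multiply and one add per term), so an ideal runtime estimate is the operation count divided by the hardware's FLOPS (the 10 TFLOPS figure below is hypothetical):

```python
def matmul_flops(m, k, n):
    """~2 operations per multiply-add: m*n outputs, k terms each."""
    return 2 * m * n * k

hardware_flops_per_sec = 10e12         # hypothetical 10 TFLOPS device
work = matmul_flops(4096, 4096, 4096)  # ~1.37e11 FLOPs
ideal_seconds = work / hardware_flops_per_sec
```

Real kernels never hit the peak rate, so this bound is a lower limit, useful for spotting memory-bound workloads.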

Foundation Model

A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks. Foundation models serve as the base upon which specialized applications are built.

Artificial Intelligence

Frontier Model

The most capable and advanced AI models available at any given time, typically characterized by the highest performance across multiple benchmarks. These models push the boundaries of AI capabilities.

Artificial Intelligence

Function Calling

A capability where an LLM can generate structured output to invoke specific functions or APIs. The model decides which function to call and what parameters to pass based on the user's request.

Artificial Intelligence
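A minimal sketch of the application side: the model emits a structured call (here as JSON), and the application validates and dispatches it. The tool name, schema, and model output below are all illustrative:

```python
import json

# Tools the application exposes to the model, typically described
# to it as JSON schemas alongside the prompt
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

# What the model might generate instead of a plain-text reply
model_output = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

def dispatch(raw):
    call = json.loads(raw)
    fn = TOOLS[call["name"]]        # validate the requested tool exists
    return fn(**call["arguments"])  # invoke with model-chosen parameters

result = dispatch(model_output)     # "Sunny in Oslo"
```

The function's return value is usually fed back to the model so it can compose a final natural-language answer.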