Emergent Behavior
Capabilities that appear in large AI models even though the models were not explicitly trained for them, and that were absent in smaller versions. Emergent abilities seem to appear suddenly once models cross certain scale thresholds.
Why It Matters
Emergent behaviors are both exciting and concerning: exciting because they expand capabilities, concerning because they are unpredictable and difficult to test for in advance.
Example
GPT-3 showing the ability to do arithmetic, translate between languages, and write code — none of which were explicit training objectives. These abilities emerged from scale.
Think of it like...
Like a city developing a culture: no one planned it, but when enough diverse people interact, collective properties appear that no individual could produce alone.
Related Terms
Scaling Laws
Empirical findings showing predictable relationships between model performance and factors like model size (parameters), dataset size, and compute budget. Performance improves as a power law with these factors.
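The power-law form can be sketched in a few lines of code. This is an illustrative toy, not a fitted model: the constants n_c and alpha below are placeholder values chosen only to show the shape of the relationship.

```python
# Illustrative sketch of a scaling law: predicted loss falls as a
# power of model size N, i.e. L(N) = (N_c / N) ** alpha.
# N_c and alpha here are made-up placeholders, not fitted values.

def scaling_law_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted loss L(N) = (N_c / N) ** alpha for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# A power law means each doubling of N multiplies the predicted loss
# by the same constant factor, 2 ** -alpha.
for n in [1e8, 1e9, 1e10]:
    print(f"N={n:.0e}  predicted loss={scaling_law_loss(n):.3f}")
```

The key property is that on a log-log plot this relationship is a straight line, which is why performance at larger scales can be extrapolated from smaller training runs.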
Large Language Model
A type of AI model trained on massive amounts of text data that can understand and generate human-like text. LLMs use transformer architecture and typically have billions of parameters, enabling them to perform a wide range of language tasks.
Frontier Model
The most capable and advanced AI models available at any given time, typically characterized by the highest performance across multiple benchmarks. These models push the boundaries of AI capabilities.
Parameter
Any learnable value in a machine learning model that is adjusted during training. Parameters include weights and biases in neural networks. Model size is often described by parameter count.
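For a plain fully connected network, the parameter count can be computed directly: each dense layer contributes a weight matrix (inputs times outputs) plus a bias vector (one value per output). The layer sizes below are arbitrary examples, not from any particular model.

```python
# Sketch: counting learnable parameters (weights + biases) of a
# fully connected network. Layer sizes are illustrative examples.

def count_parameters(layer_sizes: list[int]) -> int:
    """Total learnable values across dense layers: weights (n_in * n_out) plus biases (n_out)."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix for this layer
        total += n_out         # bias vector for this layer
    return total

# A tiny 784 -> 128 -> 10 classifier:
print(count_parameters([784, 128, 10]))  # 784*128 + 128 + 128*10 + 10 = 101770
```

The same bookkeeping, scaled up across many transformer layers, is what produces the billions of parameters cited for large language models.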