Catastrophic Forgetting
The tendency of neural networks to abruptly and severely lose previously learned information when trained on new data or tasks. Because all tasks share the same weights, gradient updates that serve the new objective overwrite the parameter values that encoded the old knowledge.
Why It Matters
Catastrophic forgetting is why you cannot simply fine-tune a model on new data without risking degradation on previous tasks. It is the central obstacle to continual learning in neural networks.
Example
A model fine-tuned on medical text that loses much of its ability to write code — the gradient updates from the new training overwrote the weights responsible for programming knowledge.
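The effect above can be reproduced at toy scale. The sketch below (a hypothetical setup: a two-weight linear model and plain gradient descent, standing in for a real network) trains on task A, then fine-tunes on task B with no safeguards, and measures the task-A error before and after — it jumps once training on B overwrites the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """A toy linear-regression 'task': inputs X with targets y = X @ true_w."""
    X = rng.normal(size=(50, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Two tasks whose solutions conflict (different true weight vectors).
task_a = make_task(np.array([1.0, -2.0]))
task_b = make_task(np.array([-3.0, 0.5]))

w = train(np.zeros(2), *task_a)
loss_a_before = mse(w, *task_a)   # near zero: task A is learned

w = train(w, *task_b)             # sequential fine-tuning on task B only
loss_a_after = mse(w, *task_a)    # task-A error rises sharply: forgetting

print(loss_a_before, loss_a_after)
```

Nothing here penalizes moving away from the task-A solution, so the weights simply migrate to the task-B optimum — the same dynamic that degrades earlier capabilities when fine-tuning a large model on a narrow domain.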
Think of it like...
Like a student who crams for a biology exam and completely forgets everything they studied for last week's history exam — new learning erases old learning.
Related Terms
Continual Learning
Training a model on new data or tasks over time without forgetting previously learned knowledge. Also called lifelong learning or incremental learning.
Fine-Tuning
The process of taking a pre-trained model and further training it on a smaller, domain-specific dataset, adjusting its weights to specialize its behavior for a particular task or domain.
Elastic Weight Consolidation
A technique for continual learning that identifies which weights are important for previously learned tasks and penalizes changes to those weights during new learning.
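A minimal sketch of that idea, reusing the toy linear-regression setup (a hypothetical simplification, not the full EWC algorithm): after task A, estimate a per-weight importance score — for a linear-Gaussian model the diagonal Fisher information reduces to the mean squared input seen by each weight — then train on task B with a quadratic penalty pulling important weights back toward their task-A values.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """A toy linear-regression 'task': inputs X with targets y = X @ true_w."""
    X = rng.normal(size=(50, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, fisher=None, w_anchor=None, lam=0.0, lr=0.01, steps=500):
    """Gradient descent on MSE, plus an optional EWC-style quadratic penalty:
    lam/2 * sum_i fisher[i] * (w[i] - w_anchor[i])**2."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        if fisher is not None:
            grad = grad + lam * fisher * (w - w_anchor)
        w = w - lr * grad
    return w

task_a = make_task(np.array([1.0, -2.0]))
task_b = make_task(np.array([-3.0, 0.5]))

w_a = train(np.zeros(2), *task_a)

# Diagonal Fisher for this linear-Gaussian toy model: mean squared input per
# weight. This stands in for the per-parameter importance EWC estimates.
fisher = np.mean(task_a[0] ** 2, axis=0)

w_plain = train(w_a.copy(), *task_b)  # ordinary fine-tuning, no penalty
w_ewc = train(w_a.copy(), *task_b, fisher=fisher, w_anchor=w_a, lam=100.0)

# The penalty keeps important weights near their task-A values, so the
# task-A error stays far lower than under plain fine-tuning.
print(mse(w_plain, *task_a), mse(w_ewc, *task_a))
```

The penalty strength (`lam` here) trades off stability against plasticity: too small and forgetting returns, too large and the model cannot fit the new task.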