Transfer Learning
A technique in which a model trained on one task is reused as the starting point for a model on a different but related task. Instead of training from scratch, the new model builds on knowledge the original model has already acquired.
Why It Matters
Transfer learning dramatically reduces the data, time, and compute needed to build AI models, making AI accessible to organizations without massive datasets.
Example
Using a model pre-trained on millions of general images as the starting point for a specialized model that detects manufacturing defects, trained with only a few hundred labeled examples.
Think of it like...
Like a professional chef who switches from French to Italian cuisine — they do not start from zero because knife skills, timing, and flavor principles all transfer.
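The pattern in the example above can be sketched in miniature. This is a toy illustration, not a real vision pipeline: the "pretrained" weights, the two-feature inputs, and the tiny dataset are all made-up stand-ins for a deep network trained on millions of images. The key move is the same, though: the pretrained layer is frozen, and only a small new head is trained on the scarce task-specific data.

```python
import math

# "Pretrained" feature extractor: fixed weights, a stand-in for a deep
# network trained on a large general dataset. All numbers are made up.
PRETRAINED_W = [[0.9, -0.2], [0.1, 0.8]]

def extract_features(x):
    """Frozen pretrained layer: its weights are never updated."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in PRETRAINED_W]

def train_head(data, epochs=200, lr=0.1):
    """Train only the new task-specific head (a single logistic unit)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = extract_features(x)                 # reuse frozen features
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
            g = p - y                               # log-loss gradient
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = extract_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Only a handful of labeled examples for the new task.
data = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
w, b = train_head(data)
```

Because only the small head is trained, a few examples are enough; the heavy lifting was already done during pre-training.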
Related Terms
Pre-Training
The initial phase of training a model on a large, general-purpose dataset before specializing it for specific tasks. Pre-training gives the model broad knowledge and capabilities.
Fine-Tuning
The process of taking a pre-trained model and further training it on a smaller, domain-specific dataset to specialize its behavior for a particular task or domain. Fine-tuning adjusts the model's weights to improve performance on the target task.
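The same toy setup can illustrate fine-tuning as a contrast: instead of freezing the pretrained weights, all of them keep training on the small dataset, at a low learning rate so the model adapts without discarding what it already knows. The inherited weights and the dataset below are hypothetical values chosen for illustration.

```python
import math

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def fine_tune(w, b, data, epochs=200, lr=0.05):
    """Continue training ALL weights, at a low learning rate."""
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # log-loss gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Hypothetical weights inherited from pre-training on a related task:
# they treat both features as equally positive evidence.
pretrained_w, pretrained_b = [0.5, 0.5], 0.0

# Small domain-specific dataset: here only the FIRST feature signals
# the positive class, so the pretrained weights need adjusting.
data = [([1.0, 0.2], 1), ([0.8, 0.1], 1), ([0.3, 0.9], 0), ([0.2, 1.0], 0)]
w, b = fine_tune(pretrained_w, pretrained_b, data)
```

Starting from the pretrained weights rather than zeros is what makes the small dataset sufficient; the low learning rate keeps the updates gentle.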
Foundation Model
A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks. Foundation models serve as the base upon which specialized applications are built.