Retrieval-Augmented Fine-Tuning
A technique that combines fine-tuning with retrieval, training a model to use retrieved context effectively. RAFT teaches the model when and how to leverage external knowledge rather than relying only on what it memorized during training.
Why It Matters
RAFT produces models that use retrieved documents more effectively than either RAG or fine-tuning alone, combining the strengths of both approaches.
Example
Fine-tuning a model on examples that include both relevant and irrelevant retrieved documents, teaching it to identify and use the right context while ignoring distractors.
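A minimal sketch of how such a training example might be assembled. The function name, the prompt format, and the probability of including the relevant document are all illustrative assumptions, not a prescribed RAFT recipe:

```python
import random

def build_raft_example(question, relevant_doc, distractor_docs,
                       answer, p_relevant=0.8, seed=None):
    """Assemble one fine-tuning example: the prompt mixes the relevant
    document with distractors, and with some probability omits the
    relevant document entirely, so the model also learns to answer
    when retrieval fails. (Illustrative format, not a fixed spec.)"""
    rng = random.Random(seed)
    docs = list(distractor_docs)
    if rng.random() < p_relevant:      # usually include the relevant doc
        docs.append(relevant_doc)
    rng.shuffle(docs)                  # hide its position among distractors
    context = "\n\n".join(f"[doc {i + 1}] {d}" for i, d in enumerate(docs))
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return {"prompt": prompt, "completion": " " + answer}

example = build_raft_example(
    question="What year was the transistor invented?",
    relevant_doc="The transistor was invented at Bell Labs in 1947.",
    distractor_docs=[
        "Photosynthesis converts light into chemical energy.",
        "The Eiffel Tower was completed in 1889.",
    ],
    answer="1947",
    seed=1,
)
```

Training on many such examples, with the distractors varied and the relevant document sometimes absent, is what teaches the model to pick out the right context and ignore the rest.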
Think of it like...
Like training a researcher not just to find papers but to critically evaluate which ones are relevant and how to synthesize useful information from them.
Related Terms
Fine-Tuning
The process of taking a pre-trained model and further training it on a smaller, domain-specific dataset to specialize its behavior for a particular task or domain. Fine-tuning adjusts the model's weights to improve performance on the target task.
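As a toy illustration of the idea (real fine-tuning updates millions of LLM weights with a training framework, not one parameter), here is gradient descent continuing the training of a tiny pre-trained model on a small domain dataset. All names and numbers are made up for the sketch:

```python
def fine_tune(w, data, lr=0.1, epochs=50):
    """Continue training a one-parameter model y = w * x on a small
    domain-specific dataset, minimizing squared error by gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

w_pretrained = 1.0                       # stands in for pre-trained weights
domain_data = [(1.0, 3.0), (2.0, 6.0)]   # small task-specific dataset: y = 3x
w_tuned = fine_tune(w_pretrained, domain_data)   # converges toward 3.0
```

The pre-trained starting point is what makes this "fine-tuning" rather than training from scratch: the domain data only nudges the existing weights toward the target behavior.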
Retrieval-Augmented Generation
A technique that enhances LLM outputs by first retrieving relevant information from external knowledge sources and then using that information as context for generation. RAG combines the power of search with the fluency of language models.
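A minimal sketch of the RAG pattern, with simple word overlap standing in for the embedding-based similarity search a real system would use; the function names and prompt format are assumptions for illustration:

```python
def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query (a crude stand-in
    for vector similarity) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, corpus):
    """Retrieve relevant context, then prepend it to the generation prompt
    that would be sent to the language model."""
    context = "\n".join(retrieve(query, corpus, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The capital of France is Paris.",
    "Python was created by Guido van Rossum.",
]
prompt = build_rag_prompt("What is the capital of France?", corpus)
```

Note that in plain RAG the model's weights are untouched; the retrieved context arrives only at inference time, which is exactly the gap RAFT addresses by training on such prompts.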
Fine-Tuning vs RAG
The strategic decision between customizing a model's weights (fine-tuning) or providing external knowledge at inference time (RAG). Each approach has different strengths and use cases.
Training Data
The dataset used to teach a machine learning model. It contains examples (and often labels) that the model learns patterns from during the training process. The quality and quantity of training data directly impact model performance.