Machine Learning

Cross-Validation

A model evaluation technique that splits data into multiple folds, trains on all but one fold and tests on the held-out fold, repeating until every fold has served as the test set. Averaging the per-fold scores gives a robust estimate of model performance.

Why It Matters

Cross-validation gives you confidence in your model's performance. A model that scores well on one test split might have gotten lucky — cross-validation averages over multiple splits.

Example

5-fold cross-validation: split 1,000 examples into 5 groups of 200, train on 4 groups (800 examples) and test on the remaining 1, and repeat 5 times so each group serves as the test set exactly once. The final performance estimate is the average of the 5 test scores.
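The splitting step above can be sketched in plain Python. This is a minimal illustration, not a production implementation: `k_fold_indices` is a hypothetical helper name, and it assumes the data size divides evenly by the number of folds (in practice, a library routine such as scikit-learn's `KFold` handles uneven sizes and shuffling).

```python
def k_fold_indices(n_examples, k):
    """Yield (train_indices, test_indices) for each of k folds."""
    fold_size = n_examples // k  # assumes n_examples divides evenly by k
    indices = list(range(n_examples))
    for i in range(k):
        # Fold i is held out for testing; everything else is training data.
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

# 1,000 examples split into 5 folds of 200: each pass trains on
# 800 examples and tests on the held-out 200.
folds = list(k_fold_indices(1000, 5))
```

In a real project you would loop over `folds`, fit the model on each training split, score it on the matching test split, and average the 5 scores.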

Think of it like...

Like a restaurant that tests a new dish with five different groups of diners rather than just one — the average feedback is much more reliable.

Related Terms