Confusion Matrix
A table that summarizes the performance of a classification model by showing true positives, true negatives, false positives, and false negatives. It reveals the types of errors a model makes.
Why It Matters
The confusion matrix tells you not just how often the model is wrong but how it is wrong — information that accuracy alone cannot provide.
Example
A medical test confusion matrix for 1,000 patients: 90 true positives (correctly detected disease), 5 false negatives (missed disease), 8 false positives (false alarms), and 897 true negatives (correctly cleared healthy patients).
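The example's counts can be arranged into the conventional 2x2 layout. A minimal sketch using NumPy, assuming the common convention of rows = actual class, columns = predicted class:

```python
import numpy as np

# Counts from the medical test example above
tp, fn, fp, tn = 90, 5, 8, 897

# Rows = actual class, columns = predicted class:
#              predicted +   predicted -
# actual +   [     TP      ,      FN     ]
# actual -   [     FP      ,      TN     ]
matrix = np.array([[tp, fn],
                   [fp, tn]])

total = int(matrix.sum())   # all 1,000 tests accounted for
print(matrix)
print("total tests:", total)
```

Note that libraries differ on layout (some put predicted labels on the rows), so always check which convention a given tool uses before reading off the cells.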
Think of it like...
Like a scorecard that shows not just how many games a team won but exactly which opponents they beat and lost to — it reveals patterns in performance.
Related Terms
Accuracy
The percentage of correct predictions out of all predictions made by a model. While intuitive, accuracy can be misleading for imbalanced datasets.
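A quick sketch of why accuracy misleads on imbalanced data, using an invented split (5% positive) rather than the example above: a model that always predicts "healthy" still looks impressive.

```python
# Hypothetical imbalanced dataset: 1,000 cases, only 5% have the disease.
positives, negatives = 50, 950

# A useless model that always predicts "healthy" gets every negative right
# and every positive wrong.
correct = negatives
accuracy = correct / (positives + negatives)
print(f"accuracy: {accuracy:.2%}")  # 95.00%, despite detecting zero cases
```

This is exactly the failure mode the confusion matrix exposes: all 50 positives land in the false-negative cell.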
Precision
Of all the items the model predicted as positive, the proportion that were actually positive. Precision measures how trustworthy the model's positive predictions are.
Recall
Of all the actually positive items in the dataset, the proportion that the model correctly identified. Recall measures how completely the model finds all relevant items.
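Both definitions reduce to simple ratios over the confusion-matrix cells. A sketch using the counts from the medical test example:

```python
tp, fn, fp = 90, 5, 8   # counts from the medical test example

# Precision: of everything predicted positive, how much was right?
precision = tp / (tp + fp)   # 90 / 98

# Recall: of everything actually positive, how much was found?
recall = tp / (tp + fn)      # 90 / 95

print(f"precision: {precision:.3f}")  # 0.918
print(f"recall:    {recall:.3f}")     # 0.947
```

Notice the two metrics share the numerator (true positives) but differ in what they divide by, which is why improving one often costs the other.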
F1 Score
The harmonic mean of precision and recall, providing a single metric that balances both. F1 scores range from 0 to 1, with 1 being perfect precision and recall.
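Continuing with the precision and recall computed from the medical test example, the harmonic mean looks like this:

```python
precision, recall = 90 / 98, 90 / 95   # from the medical test example

# Harmonic mean: if either metric is low, F1 is dragged down with it,
# unlike a plain average which would mask the weaker of the two.
f1 = 2 * precision * recall / (precision + recall)
print(f"F1: {f1:.3f}")  # 0.933
```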
Classification
A type of supervised learning task where the model predicts which category or class an input belongs to. The output is a discrete label rather than a continuous value.
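The discrete-label idea can be shown with a toy classifier; the feature and threshold here are invented for illustration, not a real model or medical guideline:

```python
# A toy classifier: maps a continuous input (tumor size in mm)
# to a discrete label. The 10 mm threshold is purely illustrative.
def classify(tumor_size_mm: float) -> str:
    return "malignant" if tumor_size_mm >= 10 else "benign"

print(classify(14.2))  # discrete label, not a number
print(classify(3.5))
```

Comparing such discrete predictions against the true labels is what fills in the cells of a confusion matrix.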