Loss Function
A mathematical function that measures how far a model's predictions are from the true values. Training aims to minimize this loss, making predictions as accurate as possible.
Why It Matters
Choosing the right loss function is critical — it defines what 'good' means for your model. A poorly chosen loss function produces a model that optimizes for the wrong thing.
Example
Mean Squared Error for predicting house prices (penalizes large errors heavily), or Cross-Entropy Loss for classification tasks like spam detection.
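The two losses above can be sketched in a few lines of plain Python. This is a minimal illustration with made-up numbers (the house prices and spam probabilities are hypothetical, not from any dataset):

```python
import math

def mean_squared_error(y_true, y_pred):
    """Average of squared differences; large errors are penalized quadratically."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred):
    """Loss for binary classification; y_pred are probabilities in (0, 1)."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_pred)) / len(y_true)

# House prices (in $1000s): an error of 20 costs 400, an error of 10 costs 100
prices_true = [300, 450, 200]
prices_pred = [310, 430, 205]
print(mean_squared_error(prices_true, prices_pred))  # 175.0

# Spam detection: labels are 0/1, predictions are probabilities
labels = [1, 0, 1]
probs = [0.9, 0.2, 0.8]
print(round(binary_cross_entropy(labels, probs), 4))  # 0.1839
```

Note how MSE's squaring makes the 20-unit error contribute four times as much as the 10-unit error — this is what "penalizes large errors heavily" means in practice.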
Think of it like...
Like a scorekeeper in a game who measures how far off each player's guess is from the correct answer — the training process tries to minimize that score.
Related Terms
Gradient Descent
An optimization algorithm that minimizes a model's error (loss) by iteratively adjusting parameters in the direction of the negative gradient — the direction in which the loss decreases most quickly. It is the primary method for training machine learning models.
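A minimal sketch of gradient descent fitting a one-parameter model y = w * x to toy data (the data, learning rate, and step count are illustrative choices):

```python
# Toy data generated by the true relationship y = 2x
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0    # initial parameter guess
lr = 0.01  # learning rate (step size)

for step in range(200):
    # Gradient of MSE with respect to w: mean of 2*x*(w*x - y)
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step opposite the gradient to reduce the loss

print(round(w, 3))  # converges toward the true value 2.0
```

Each iteration moves w a small step downhill on the loss surface; after enough steps it settles near the value that minimizes the loss.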
Backpropagation
The primary algorithm used to train neural networks. It applies the chain rule to calculate how much each weight in the network contributed to the error, propagating these gradients backward from the output layer so an optimizer (such as gradient descent) can adjust the weights to reduce future errors.
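The backward pass can be shown on the smallest possible "network" — a single sigmoid neuron with a squared-error loss. All values here are illustrative; real networks repeat this chain-rule step layer by layer:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x, y = 1.5, 1.0  # one hypothetical training example
w, b = 0.2, 0.0  # initial weight and bias

# Forward pass: prediction and loss
z = w * x + b
y_hat = sigmoid(z)
loss = (y_hat - y) ** 2

# Backward pass: chain rule from the loss back to each parameter
dloss_dyhat = 2 * (y_hat - y)
dyhat_dz = y_hat * (1 - y_hat)  # derivative of sigmoid
grad_w = dloss_dyhat * dyhat_dz * x    # dz/dw = x
grad_b = dloss_dyhat * dyhat_dz * 1.0  # dz/db = 1

# One gradient step on each parameter reduces the loss
lr = 0.5
w -= lr * grad_w
b -= lr * grad_b
```

Recomputing the forward pass with the updated w and b yields a smaller loss than before the step — the gradients pointed each weight in the error-reducing direction.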
Cross-Entropy
A loss function commonly used in classification tasks that measures the difference between the predicted probability distribution and the actual distribution. Lower cross-entropy means better predictions.
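For a classification example with one-hot labels, cross-entropy reduces to the negative log of the probability the model assigned to the true class. A small sketch (class labels and probabilities are hypothetical):

```python
import math

def cross_entropy(true_dist, pred_dist):
    """Divergence between the actual (one-hot) and predicted distributions."""
    return -sum(t * math.log(p) for t, p in zip(true_dist, pred_dist) if t > 0)

# True class is "spam" (index 0); the model assigns it probability 0.7
print(round(cross_entropy([1, 0], [0.7, 0.3]), 4))   # 0.3567

# A more confident correct prediction gives a lower loss
print(round(cross_entropy([1, 0], [0.95, 0.05]), 4))  # 0.0513
```

Lower cross-entropy means the predicted distribution places more probability on the correct answer.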