Perceptron
The simplest form of a neural network: a single neuron that computes a weighted sum of its inputs and applies an activation function, classically a step function, to produce a binary output. It is the fundamental building block of neural networks.
Why It Matters
The perceptron, introduced by Frank Rosenblatt in 1957, is where neural networks began. Understanding it provides the foundation for comprehending all modern deep learning architectures.
Example
A perceptron with inputs [hours_studied, hours_slept] and weights [0.6, 0.4] computes a weighted sum and outputs pass or fail depending on whether the sum exceeds a threshold.
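The example above can be sketched in a few lines of Python. The input values and the threshold of 5.0 are illustrative assumptions, not values from the original example:

```python
def perceptron(inputs, weights, threshold):
    """Weighted sum followed by a step activation:
    returns 1 (pass) if the sum exceeds the threshold, else 0 (fail)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# inputs: [hours_studied, hours_slept], weights [0.6, 0.4] as in the example
print(perceptron([8, 7], [0.6, 0.4], threshold=5.0))  # 0.6*8 + 0.4*7 = 7.6 > 5 -> 1
print(perceptron([2, 4], [0.6, 0.4], threshold=5.0))  # 0.6*2 + 0.4*4 = 2.8 <= 5 -> 0
```

In a real perceptron the weights and threshold (or, equivalently, a bias term) would be learned from labeled data rather than fixed by hand.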
Think of it like...
Like a simple voting system where each input gets a vote (weight), and the decision is made based on whether the total votes exceed a minimum threshold.
Related Terms
Neural Network
A computing system inspired by the biological neural networks in the human brain. It consists of interconnected nodes (neurons) organized in layers that process information and learn to recognize patterns.
Activation Function
A mathematical function applied to the output of each neuron in a neural network that introduces non-linearity. Without activation functions, a neural network would just be a series of linear transformations.
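The claim that a network without activation functions is just a series of linear transformations can be checked directly: two stacked linear layers collapse into one layer whose weight matrix is the product of the two. The matrix and vector values below are arbitrary, illustrative assumptions:

```python
def linear(W, x):
    # matrix-vector product implemented with plain lists
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matmul(A, B):
    # matrix-matrix product: combines two linear layers into one
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[0.5, -1.0], [2.0, 0.3]]   # first linear layer (illustrative values)
W2 = [[1.0, 0.25], [-0.5, 2.0]]  # second linear layer
x = [3.0, -2.0]

# Applying the two layers in sequence, with no activation in between...
two_layers = linear(W2, linear(W1, x))
# ...gives the same result as one layer with combined weights W2 @ W1.
one_layer = linear(matmul(W2, W1), x)
print(all(abs(a - b) < 1e-9 for a, b in zip(two_layers, one_layer)))  # True
```

Inserting a non-linear activation (e.g. a step, sigmoid, or ReLU) between the layers breaks this collapse, which is what lets deep networks represent non-linear functions.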
Weight
A numerical parameter in a neural network that is learned during training. Weights determine the strength of connections between neurons and collectively encode the model's knowledge.