Machine Learning

ReLU

Rectified Linear Unit — the most commonly used activation function in deep learning. It outputs the input directly if positive, and zero otherwise: f(x) = max(0, x).

Why It Matters

ReLU mitigates the vanishing gradient problem (its gradient is exactly 1 for all positive inputs, so gradients do not shrink as they propagate through layers), which enabled training of much deeper networks and helped catalyze the deep learning revolution.

Example

In a neuron with ReLU: input of 5 outputs 5, input of -3 outputs 0. It simply clips negative values to zero.
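The behavior above can be sketched in a few lines of NumPy; the function name `relu` is just an illustrative choice:

```python
import numpy as np

def relu(x):
    # Element-wise ReLU: returns x where x > 0, otherwise 0.
    return np.maximum(0, x)

print(relu(np.array([5.0, -3.0, 0.0])))  # positive values pass through, negatives clip to 0
```

Applied to the examples in the text, `relu(5)` gives 5 and `relu(-3)` gives 0.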

Think of it like...

Like a one-way valve in plumbing — it lets positive flow through unchanged but blocks anything negative.

Related Terms