Artificial Intelligence

Hardware Acceleration

Using specialized hardware (GPUs, TPUs, FPGAs, ASICs) to speed up AI computation compared to general-purpose CPUs. Accelerators are optimized for the specific math operations used in neural networks.

Why It Matters

Hardware acceleration has cut the cost and time of AI training by orders of magnitude over the past decade. Competition among accelerator providers is a major driver of the pace of AI progress.

Example

Training a model in 3 days on GPUs that would take 3 years on CPUs — the same computation, but specialized hardware makes it practical.
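The same gap shows up at small scale. As a rough analogy (not a GPU benchmark), the sketch below times one matrix multiply, the core operation in neural networks, two ways: as a general-purpose interpreted Python loop, and as a single call to NumPy's optimized BLAS routine, which stands in for specialized hardware. The matrix size of 100x100 is an arbitrary choice for illustration.

```python
import time
import numpy as np

n = 100
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# "General-purpose" computation: an interpreted triple loop.
start = time.perf_counter()
naive = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
         for i in range(n)]
naive_time = time.perf_counter() - start

# "Specialized" computation: one call to an optimized matmul kernel.
start = time.perf_counter()
fast = a @ b
fast_time = time.perf_counter() - start

print(f"loop: {naive_time:.3f}s  optimized: {fast_time:.5f}s  "
      f"speedup: {naive_time / fast_time:.0f}x")
```

Both paths compute the identical result; only the machinery differs. On typical hardware the optimized call is hundreds of times faster, and real accelerators push the same idea much further with hardware built specifically for these operations.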

Think of it like...

Like using a dishwasher instead of hand-washing — specialized equipment handles the specific task dramatically faster than general-purpose effort.

Related Terms