Data Science

Crowdsourcing

Using a large group of distributed workers (often through platforms like Amazon Mechanical Turk or Scale AI) to perform data annotation and labeling tasks.

Why It Matters

Crowdsourcing enables annotation at scale but introduces quality challenges: workers vary in skill and attention, so teams rely on techniques such as redundant labeling, majority voting, and gold-standard test questions. Managing crowd worker quality is a critical skill in ML data operations.

Example

Using Scale AI to distribute 100,000 image labeling tasks across thousands of workers, with quality checks and redundant labeling to ensure accuracy.
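The quality checks and redundant labeling mentioned above can be sketched in code. Below is a minimal, hypothetical illustration (not any platform's actual API): each image is labeled by several workers, a majority vote picks the final label, and gold-standard questions with known answers estimate per-worker accuracy. All names and data are illustrative assumptions.

```python
# Hedged sketch of two common crowdsourcing quality controls:
# 1) majority vote over redundant labels, 2) gold-question worker scoring.
from collections import Counter

def majority_vote(labels_per_item):
    """Pick the most common label per item; None on a tie (escalate to review)."""
    results = {}
    for item, labels in labels_per_item.items():
        counts = Counter(labels).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            results[item] = None  # tie -> route to an expert reviewer
        else:
            results[item] = counts[0][0]
    return results

def worker_accuracy(worker_answers, gold):
    """Fraction of a worker's answers on gold-standard items that are correct."""
    scored = [ans == gold[item] for item, ans in worker_answers.items()
              if item in gold]
    return sum(scored) / len(scored) if scored else None

# Illustrative data: three workers redundantly label each image
labels = {
    "img_001": ["cat", "cat", "dog"],   # majority -> "cat"
    "img_002": ["dog", "dog", "dog"],   # unanimous -> "dog"
    "img_003": ["cat", "dog"],          # tie -> None, needs review
}
final = majority_vote(labels)

# Score one worker against gold questions with known answers
gold = {"g1": "cat", "g2": "dog"}
acc = worker_accuracy({"g1": "cat", "g2": "cat"}, gold)  # one of two correct
```

In practice, platforms automate these checks: workers whose gold-question accuracy falls below a threshold are removed, and their labels are discarded or down-weighted.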

Think of it like...

Like crowdfunding but for labor — instead of one expert spending months, hundreds of workers each contribute a small piece, completing the job quickly.

Related Terms