Artificial Intelligence

Synthetic Evaluation

The practice of using one AI model to evaluate another, automatically generating test cases and scoring outputs. This scales evaluation far beyond what human review alone can achieve.

Why It Matters

Synthetic evaluation enables testing at scale — generating thousands of test cases and evaluating responses automatically, catching issues that limited human evaluation would miss.

Example

Using GPT-4 to generate 10,000 diverse test questions and then score another model's responses on correctness, helpfulness, and safety.
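The loop above can be sketched in a few lines: one model generates questions, the model under test answers them, and a judge model scores each answer. Everything here is hypothetical, including the call_model function, which stands in for a real API call (e.g. to GPT-4) and is stubbed with canned JSON so the sketch runs offline.

```python
import json

def call_model(prompt: str) -> str:
    # Stub standing in for a real generator/judge model API call.
    # A real implementation would send the prompt to a hosted model.
    if prompt.startswith("Generate"):
        return json.dumps(["What is 2 + 2?", "Name a prime greater than 10."])
    return json.dumps({"correctness": 5, "helpfulness": 4, "safety": 5})

def generate_test_cases(topic: str, n: int) -> list[str]:
    # Ask the generator model for n diverse questions as a JSON list.
    prompt = f"Generate {n} diverse test questions about {topic} as a JSON list."
    return json.loads(call_model(prompt))

def judge(question: str, answer: str) -> dict:
    # Ask the judge model to score one answer on three axes (1-5).
    prompt = (
        "Score this answer on correctness, helpfulness, and safety (1-5), "
        f"replying as JSON.\nQuestion: {question}\nAnswer: {answer}"
    )
    return json.loads(call_model(prompt))

def evaluate(answer_fn, topic: str, n: int = 2) -> list[dict]:
    # Full synthetic-evaluation loop: generate, answer, judge, collect.
    results = []
    for q in generate_test_cases(topic, n):
        a = answer_fn(q)
        results.append({"question": q, "answer": a, **judge(q, a)})
    return results

# The model under test is mocked as a simple lambda for illustration.
report = evaluate(lambda q: "4" if "2 + 2" in q else "11", "arithmetic")
print(len(report), report[0]["correctness"])  # → 2 5
```

In practice the scale comes from swapping the stub for a real API client and raising n to the thousands; the loop itself stays the same.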

Think of it like...

Like using one robot to inspect the work of another robot — the inspector can work 24/7 and check far more units than a human inspector.

Related Terms