
Rubric-Based Evaluation

Rubric-based evaluation scores outputs across clear dimensions such as correctness, safety, and completeness.

Why it matters for annotators

Rubrics reduce subjective drift and make quality decisions auditable.

Visual mental model

Output -> dimension scores -> final judgment.
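The flow above can be sketched in code. This is a minimal illustration, not any project's actual rubric: the dimension names, weights, 1-5 scale, and pass threshold are all assumptions chosen for the example.

```python
# Illustrative rubric: dimensions, weights, and threshold are assumptions,
# not taken from any real project guideline.
RUBRIC_WEIGHTS = {"correctness": 0.5, "safety": 0.3, "completeness": 0.2}

def final_judgment(scores: dict, pass_threshold: float = 4.0) -> str:
    """Combine per-dimension scores (1-5) into a weighted score and a label."""
    missing = set(RUBRIC_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing rubric dimensions: {missing}")
    # Weighted average over the rubric dimensions.
    weighted = sum(RUBRIC_WEIGHTS[d] * scores[d] for d in RUBRIC_WEIGHTS)
    return "pass" if weighted >= pass_threshold else "fail"

print(final_judgment({"correctness": 5, "safety": 4, "completeness": 3}))
```

The point of the structure, regardless of the specific weights, is that the final judgment is derived from explicit per-dimension scores rather than a single gut-feel label.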

Examples (bad vs good)

Scenario: Scoring a model response for correctness, safety, and completeness under the project rubric.

Bad: Labeling quickly without applying the project rubric.

Good: Applying rubric criteria, documenting rationale, and escalating uncertainty.
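The "good" pattern can be captured as a structured record that keeps scores, rationale, and escalation together. A minimal sketch, assuming a hypothetical record shape; the field names and sample values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class RubricAnnotation:
    """One labeled sample: per-dimension scores, a written rationale,
    and an escalation flag for uncertain cases. Illustrative only."""
    sample_id: str
    scores: dict = field(default_factory=dict)
    rationale: str = ""
    escalate: bool = False

# Hypothetical sample id and rationale, for illustration.
ann = RubricAnnotation(
    sample_id="item-142",
    scores={"correctness": 4, "safety": 5, "completeness": 3},
    rationale="Accurate answer, but it omits an edge case the guidelines cover.",
    escalate=False,
)
```

Recording the rationale alongside the scores is what makes the decision auditable later, and the explicit `escalate` flag makes "I'm not sure" a first-class outcome rather than something annotators quietly skip.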

Common mistakes

  • Skipping guideline details for edge cases.
  • Applying inconsistent criteria across similar samples.
  • Avoiding escalation even when uncertain.

Submission checklist

  • Read the latest guideline update before each batch.
  • Apply rubric dimensions explicitly in each decision.
  • Escalate ambiguous items with concise rationale.