Relevance: 9/10 · Prompting and Evaluation · Beginner · 6 min read

Hallucination

A hallucination is a model claim that reads as plausible but is unsupported by the provided sources or factually false.

Why it matters for annotators

Detecting hallucinations is one of the most valuable skills in LLM evaluation and safety work: a fabricated claim that passes review can be rewarded in training or reach users as fact.

Visual mental model

Claim -> check against the provided sources -> supported or unsupported verdict.
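
To make the flow concrete, here is a minimal Python sketch of that loop. The token-overlap heuristic and the SUPPORT_THRESHOLD cutoff are illustrative assumptions, not a real fact-checking method; in practice the source check is a human judgment against the project rubric.

```python
# Minimal sketch: claim -> source check -> verdict.
# The overlap heuristic and threshold are illustrative assumptions,
# not a production fact-checking algorithm.

SUPPORT_THRESHOLD = 0.6  # hypothetical cutoff for calling a claim "supported"

def check_claim(claim: str, sources: list[str]) -> str:
    """Return 'supported' or 'unsupported' for a single claim."""
    claim_tokens = set(claim.lower().split())
    if not claim_tokens:
        return "unsupported"
    for source in sources:
        source_tokens = set(source.lower().split())
        # Fraction of the claim's tokens that also appear in this source.
        overlap = len(claim_tokens & source_tokens) / len(claim_tokens)
        if overlap >= SUPPORT_THRESHOLD:
            return "supported"
    return "unsupported"

print(check_claim(
    "The Eiffel Tower is in Paris",
    ["The Eiffel Tower stands in Paris, France."],
))  # -> supported
```

Overlap with a source is necessary but not sufficient for support, which is exactly why the check step is where annotator judgment matters most.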

Examples (bad vs good)

Scenario: A model answer includes a specific-sounding statistic that does not appear anywhere in the provided source documents.

Bad: Marking the claim as supported because it sounds plausible, without applying the project rubric or checking the sources.

Good: Checking the statistic against the sources, applying each rubric criterion, documenting the rationale for the verdict, and escalating if the evidence is ambiguous.
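
One way to see the difference: the good workflow produces a complete, structured record, while the bad workflow fills in only the verdict. A minimal sketch; the class name, field names, and rubric dimensions below are hypothetical placeholders, not any project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class HallucinationAnnotation:
    """Hypothetical record for one labeled claim; all names are illustrative."""
    claim: str
    verdict: str                   # "supported" or "unsupported"
    rubric_scores: dict[str, int]  # explicit score for each rubric dimension
    rationale: str                 # why this verdict was chosen
    escalated: bool = False        # flag ambiguous items for reviewer attention

annotation = HallucinationAnnotation(
    claim="The study enrolled 1,200 participants.",
    verdict="unsupported",
    rubric_scores={"source_grounding": 1, "factuality": 1},
    rationale="No enrollment figure appears in the provided sources.",
)
```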

Common mistakes

  • Skipping the guideline details that cover edge cases.
  • Applying inconsistent criteria across similar samples.
  • Failing to escalate even when uncertain.

Submission checklist

  • Read the latest guideline update before each batch.
  • Apply rubric dimensions explicitly in each decision (see the validation sketch after this list).
  • Escalate ambiguous items with concise rationale.
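
The checklist can also be enforced mechanically before a batch goes out. A minimal sketch, assuming the hypothetical HallucinationAnnotation record from the earlier example; the checks mirror the checklist items and are illustrative, not an official validator.

```python
def validate_batch(batch: list[HallucinationAnnotation],
                   required_dimensions: set[str]) -> list[str]:
    """Return human-readable problems in a batch; an empty list means submit-ready."""
    problems = []
    for i, ann in enumerate(batch):
        # Rubric dimensions must be scored explicitly in each decision.
        missing = required_dimensions - ann.rubric_scores.keys()
        if missing:
            problems.append(f"item {i}: missing rubric scores {sorted(missing)}")
        # Every verdict needs a documented rationale.
        if not ann.rationale.strip():
            problems.append(f"item {i}: empty rationale")
        # Escalated items need a concise rationale explaining the ambiguity.
        if ann.escalated and len(ann.rationale.split()) < 5:
            problems.append(f"item {i}: escalation rationale too thin")
    return problems
```

Running validate_batch([annotation], {"source_grounding", "factuality"}) on the record from the earlier example returns an empty list, so that item would pass.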