Reward Model
A reward model is trained on human-ranked examples to predict which outputs people prefer, assigning each candidate response a scalar preference score.
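To make the training objective concrete, here is a minimal sketch of the standard pairwise (Bradley-Terry) loss, assuming PyTorch. The `RewardModel` class and its embedding inputs are illustrative assumptions, not a specific project's code.

```python
# Minimal pairwise reward model sketch (assumes PyTorch).
# RewardModel and its inputs are illustrative, not a fixed API.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response embedding to a scalar preference score."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        return self.scorer(embeddings).squeeze(-1)

def pairwise_loss(model: RewardModel,
                  chosen: torch.Tensor,
                  rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry objective: push the chosen response's score
    above the rejected one's."""
    margin = model(chosen) - model(rejected)
    return -torch.nn.functional.logsigmoid(margin).mean()
```

Each human ranking becomes one (chosen, rejected) pair; the loss falls as the model learns to score the preferred response higher.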
Why it matters for annotators
A reward model is only as good as the ranking data behind it: inconsistent or careless rankings propagate directly into the learned reward and, from there, into the trained policy.
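As an illustration of what a single high-quality ranking record might carry, here is a hypothetical example; the field names are assumptions, not a fixed schema.

```python
# One hypothetical preference record; all field names are illustrative.
preference_record = {
    "prompt": "Summarize the attached article in two sentences.",
    "chosen": "The article argues that ...",  # response ranked higher by the annotator
    "rejected": "Good article.",              # response ranked lower
    "rationale": "Chosen response meets the length rubric and covers the key claims.",
    "rubric_scores": {"helpfulness": 4, "instruction_following": 5},
}
```

The rationale and per-dimension scores are what let a reviewer audit the ranking later, which is why the checklist below asks for them explicitly.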
Visual mental model
Human rankings -> reward model training -> policy optimization.
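Below is a conceptual sketch of those three stages, reusing the `pairwise_loss` helper from the sketch above. All function names are illustrative, and policy optimization is shown as a simplified REINFORCE-style update rather than a full PPO implementation.

```python
# Illustrative end-to-end pipeline; builds on the pairwise_loss sketch above.
import torch

def train_reward_model(model, ranked_pairs, optimizer):
    """Stage 2: fit the reward model on the human rankings from stage 1."""
    for chosen, rejected in ranked_pairs:  # each pair comes from one annotator ranking
        loss = pairwise_loss(model, chosen, rejected)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def policy_loss(policy_log_probs: torch.Tensor,
                rewards: torch.Tensor) -> torch.Tensor:
    """Stage 3: a simplified REINFORCE-style surrogate that reinforces
    completions the reward model scores highly."""
    return -(policy_log_probs * rewards.detach()).mean()
```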
Examples (bad vs good)
Scenario: Two model responses to the same prompt must be ranked to produce reward model training data.
Bad: Ranking quickly without applying the project rubric.
Good: Applying each rubric criterion explicitly, documenting the rationale, and escalating uncertain cases.
Common mistakes
- Skipping guideline details for edge cases.
- Applying inconsistent criteria across similar samples.
- Avoiding escalation even when uncertain.
Submission checklist
- Read the latest guideline update before each batch.
- Apply rubric dimensions explicitly in each decision.
- Escalate ambiguous items with concise rationale.