Relevance: 8/10 · Prompting and Evaluation · Intermediate · 6 min read
Semantic Search Relevance Labeling
Semantic search relevance labeling scores whether a retrieved item satisfies the intent and context behind the query, not just whether it shares keywords with it.
Why it matters for annotators
Consistent relevance labels are the training and evaluation signal that improves ranking quality in retrieval-augmented systems; noisy labels degrade every model trained on them.
Visual mental model
Query + retrieved item -> relevance score.
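The mental model above can be sketched in code. This is a minimal illustration, not a project tool: the `Judgment` record, the 0-2 score scale, and the token-overlap heuristic standing in for human judgment are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    """One labeled (query, item) pair. Field names are hypothetical."""
    query: str
    item: str
    score: int        # assumed rubric: 0 = irrelevant, 1 = partial, 2 = relevant
    rationale: str
    escalated: bool = False

def label(query: str, item: str) -> Judgment:
    """Toy scorer: token overlap stands in for the rubric a human applies."""
    q_terms = set(query.lower().split())
    i_terms = set(item.lower().split())
    overlap = len(q_terms & i_terms) / max(len(q_terms), 1)
    if overlap >= 0.75:
        score = 2
    elif overlap >= 0.25:
        score = 1
    else:
        score = 0
    return Judgment(
        query, item, score,
        rationale=f"token overlap {overlap:.2f}",
        escalated=0.25 <= overlap < 0.35,  # borderline cases get escalated
    )
```

The point is the shape of the output, not the heuristic: every judgment carries a score, a rationale, and an explicit escalation flag, mirroring the workflow this lesson describes.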
Examples (bad vs good)
Scenario: You receive a query and a retrieved passage and must assign a relevance score under the project rubric.
Bad: Labeling quickly without applying the project rubric.
Good: Applying each rubric criterion explicitly, documenting the rationale, and escalating uncertain items.
Common mistakes
- Skipping guideline details for edge cases.
- Applying inconsistent criteria across similar samples.
- Failing to escalate items even when uncertain.
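The second mistake above, inconsistent criteria across similar samples, is mechanically detectable. A minimal sketch, assuming labels arrive as `(query, item, score)` tuples and that lowercase normalization is enough to group "similar" samples:

```python
from collections import defaultdict

def find_inconsistent(labels):
    """Return (query, item) keys that received more than one distinct score.

    `labels` is an iterable of (query, item, score) tuples; keys are
    normalized by lowercasing so trivially similar samples group together.
    """
    seen = defaultdict(set)
    for query, item, score in labels:
        seen[(query.lower(), item.lower())].add(score)
    return [key for key, scores in seen.items() if len(scores) > 1]
```

Running a check like this over a finished batch surfaces pairs to re-review before submission.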
Submission checklist
- Read the latest guideline update before each batch.
- Apply rubric dimensions explicitly in each decision.
- Escalate ambiguous items with concise rationale.
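The checklist can be enforced with a small pre-submission gate. This is a sketch under assumed field names (`score`, `rationale`, `uncertain`, `escalated`), not a real submission API:

```python
def ready_to_submit(judgment: dict) -> tuple[bool, list[str]]:
    """Check one judgment against the checklist; return (ok, problems)."""
    problems = []
    if judgment.get("score") is None:
        problems.append("missing score")                 # rubric not applied
    if not judgment.get("rationale", "").strip():
        problems.append("missing rationale")             # decision undocumented
    if judgment.get("uncertain") and not judgment.get("escalated"):
        problems.append("uncertain but not escalated")   # ambiguity swallowed
    return (not problems, problems)
```

A batch passes only when every judgment returns an empty problem list.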