Interface Design for Crowdsourcing Hierarchical Multi-Label Text Annotations
Human data labeling is an important and expensive task at the heart of supervised learning systems. Hierarchies help humans understand and organize concepts. We ask whether and how concept hierarchies can inform the design of annotation interfaces to improve labeling quality and efficiency. We study this question through the annotation of vaccine misinformation, a labeling task that is difficult and highly subjective. We investigate six user interface designs for crowdsourcing hierarchical labels by collecting over 18,000 individual annotations. Under a fixed budget, integrating hierarchies into the design improves crowd workers' F1 scores. We attribute this to (1) grouping similar concepts, which improves F1 scores by +0.16 over random groupings; (2) strong relative performance on high-difficulty examples (a relative F1 difference of +0.40); and (3) filtering out obvious negatives, which increases precision by +0.07. Ultimately, labeling schemes that integrate the hierarchy outperform those that do not, achieving a mean F1 of 0.70.
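The comparisons above score each worker's annotations against gold labels with F1 on a hierarchical multi-label task. As a rough illustration of what that measurement involves, here is a minimal sketch in Python; the toy concept hierarchy, the ancestor-closure step, and all label names are illustrative assumptions, not the paper's actual evaluation pipeline.

```python
# Minimal sketch (not the paper's code) of per-worker F1 for a
# hierarchical multi-label annotation task.
from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical fragment of a vaccine-misinformation concept hierarchy:
# child label -> parent label.
PARENT = {
    "microchips": "conspiracy",
    "population_control": "conspiracy",
    "autism": "side_effects",
}

def close_under_hierarchy(labels):
    """Expand a label set with all of its ancestors, so selecting a
    child concept also counts as selecting its parent concept."""
    closed = set(labels)
    for label in labels:
        while label in PARENT:
            label = PARENT[label]
            closed.add(label)
    return closed

# Gold labels and one worker's annotations for three hypothetical posts.
gold = [{"microchips"}, {"autism"}, {"population_control"}]
worker = [{"microchips"}, {"side_effects"}, {"conspiracy"}]

gold = [close_under_hierarchy(y) for y in gold]
worker = [close_under_hierarchy(y) for y in worker]

# Binarize label sets into indicator matrices over a shared label space.
mlb = MultiLabelBinarizer().fit(gold + worker)
Y_true, Y_pred = mlb.transform(gold), mlb.transform(worker)

# Micro-averaged F1 pools every (post, label) decision, so partial credit
# accrues when a worker picks the right parent but the wrong child.
print(f1_score(Y_true, Y_pred, average="micro"))
```

Closing each label set under the hierarchy before scoring is one common convention for hierarchical evaluation; it rewards a worker who selects a correct parent concept even when the exact leaf is missed, which matches the intuition behind grouping similar concepts in the interface.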