Understanding interobserver agreement: the kappa statistic.

Status

  • Published

Type

  • Journal Article

Abstract

Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described.
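
As a quick illustration of the statistic the abstract describes, Cohen's kappa for two observers is kappa = (po - pe) / (1 - pe), where po is the observed proportion of agreement and pe is the proportion of agreement expected by chance, computed from each observer's marginal totals. The sketch below (in Python; the function name and the example counts are hypothetical, not taken from the article) computes kappa from a 2 x 2 agreement table:

    def cohens_kappa(table):
        """Cohen's kappa from a square agreement table.

        table[i][j] = number of subjects rated category i by observer 1
        and category j by observer 2.
        """
        n = sum(sum(row) for row in table)
        # Observed agreement: proportion of subjects on the diagonal.
        p_o = sum(table[i][i] for i in range(len(table))) / n
        # Chance agreement: products of marginal totals, summed over categories.
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
        # Rescale so that 0 = chance-level agreement and 1 = perfect agreement.
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical data: two observers rating 100 subjects as positive/negative.
    # Both tables show 90% observed agreement, but kappa differs with prevalence.
    print(cohens_kappa([[45, 5], [5, 45]]))  # balanced prevalence: kappa = 0.80
    print(cohens_kappa([[85, 5], [5, 5]]))   # rare finding: kappa ~ 0.44

The second call shows the limitation noted above: when the finding is rare, chance agreement (pe) is high, so the same 90% observed agreement yields a much lower kappa.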

Cited Authors

  • Viera, AJ; Garrett, JM

Published Date

  • May 2005

Published In

  • Family Medicine

Volume / Issue

  • 37 / 5

Start / End Page

  • 360 - 363

PubMed ID

  • 15883903

International Standard Serial Number (ISSN)

  • 0742-3225

Language

  • English

Country

  • United States