
Assessing the performance of prediction models: a framework for traditional and novel measures.

Publication: Journal Article
Steyerberg, EW; Vickers, AJ; Cook, NR; Gerds, T; Gonen, M; Obuchowski, N; Pencina, MJ; Kattan, MW
Published in: Epidemiology
January 2010

The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver operating characteristic [ROC] curve), and goodness-of-fit statistics for calibration.

Several new measures have recently been proposed that can be seen as refinements of discrimination measures, including variants of the c statistic for survival, reclassification tables, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). Moreover, decision-analytic measures have been proposed, including decision curves to plot the net benefit achieved by making decisions based on model predictions.

We aimed to define the role of these relatively novel approaches in the evaluation of the performance of prediction models. For illustration, we present a case study of predicting the presence of residual tumor versus benign tissue in patients with testicular cancer (n = 544 for model development, n = 273 for external validation).

We suggest that reporting discrimination and calibration will always be important for a prediction model. Decision-analytic measures should be reported if the predictive model is to be used for clinical decisions. Other measures of performance may be warranted in specific applications, such as reclassification metrics to gain insight into the value of adding a novel predictor to an established model.
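For readers unfamiliar with the measures named in the abstract, the following is a minimal sketch of how the Brier score, the concordance (c) statistic, and the net benefit used in decision curve analysis can be computed from predicted probabilities and observed binary outcomes. It uses synthetic data and plain NumPy; it is not code from the article and does not use the testicular cancer cohort, and the function names and threshold values are illustrative assumptions.

```python
import numpy as np

def brier_score(y, p):
    """Overall performance: mean squared difference between predicted
    probability p and observed 0/1 outcome y (lower is better)."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.mean((p - y) ** 2)

def c_statistic(y, p):
    """Discrimination: probability that a randomly chosen event receives a
    higher predicted risk than a randomly chosen non-event (equals the AUC)."""
    y, p = np.asarray(y), np.asarray(p, float)
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]          # all event/non-event pairs
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

def net_benefit(y, p, threshold):
    """Decision-analytic measure at a given risk threshold pt:
    NB = TP/n - FP/n * pt / (1 - pt), treating everyone with p >= pt."""
    y, p = np.asarray(y), np.asarray(p, float)
    n = len(y)
    treat = p >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

# Toy illustration on synthetic data (not the data analyzed in the paper).
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=500)
p = np.clip(0.3 + 0.25 * (y - 0.3) + rng.normal(0, 0.15, size=500), 0.01, 0.99)

print(f"Brier score: {brier_score(y, p):.3f}")
print(f"c statistic: {c_statistic(y, p):.3f}")
for pt in (0.1, 0.2, 0.3):
    print(f"net benefit at threshold {pt:.1f}: {net_benefit(y, p, pt):.3f}")
```

The pairwise c-statistic computation shown here is O(n_events × n_non-events), which is fine for small samples; rank-based formulas are preferable for large data. A decision curve, as described in the abstract, is obtained by plotting net benefit over a range of clinically reasonable thresholds.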

Published In

Epidemiology

DOI

10.1097/EDE.0b013e3181c30fb2

EISSN

1531-5487

Publication Date

January 2010

Volume

21

Issue

1

Start / End Page

128 / 138

Location

United States

Related Subject Headings

  • Risk Assessment
  • Reproducibility of Results
  • ROC Curve
  • Prognosis
  • Models, Statistical
  • Epidemiology
  • Epidemiologic Studies
  • 4905 Statistics
  • 4206 Public health
  • 4202 Epidemiology
 

Citation

APA
Steyerberg, E. W., Vickers, A. J., Cook, N. R., Gerds, T., Gonen, M., Obuchowski, N., … Kattan, M. W. (2010). Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology, 21(1), 128–138. https://doi.org/10.1097/EDE.0b013e3181c30fb2

Chicago
Steyerberg, Ewout W., Andrew J. Vickers, Nancy R. Cook, Thomas Gerds, Mithat Gonen, Nancy Obuchowski, Michael J. Pencina, and Michael W. Kattan. “Assessing the performance of prediction models: a framework for traditional and novel measures.” Epidemiology 21, no. 1 (January 2010): 128–38. https://doi.org/10.1097/EDE.0b013e3181c30fb2.

ICMJE
Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, et al. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010 Jan;21(1):128–38.

MLA
Steyerberg, Ewout W., et al. “Assessing the performance of prediction models: a framework for traditional and novel measures.” Epidemiology, vol. 21, no. 1, Jan. 2010, pp. 128–38. Pubmed, doi:10.1097/EDE.0b013e3181c30fb2.

NLM
Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010 Jan;21(1):128–138.
