A calibration hierarchy for risk models was defined: from utopia to empirical data.

Published

Journal Article

OBJECTIVE: Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions.

STUDY DESIGN AND SETTING: We present results based on simulated data sets.

RESULTS: A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic.

CONCLUSION: Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration.
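Two of the calibration levels named in the abstract can be sketched in code. The following is a hypothetical illustration (not the paper's own method or data): mean calibration compares the average predicted risk to the observed event rate, and moderate calibration is checked crudely by grouping patients into risk bins and comparing the mean predicted risk with the event rate per bin. All function names and the toy data are invented for this sketch.

```python
def mean_calibration(preds, outcomes):
    """Mean calibration (calibration-in-the-large): the average
    predicted risk should equal the observed event rate."""
    return sum(preds) / len(preds), sum(outcomes) / len(outcomes)

def moderate_calibration_bins(preds, outcomes, n_bins=5):
    """Moderate calibration: among patients with a predicted risk of
    R%, roughly R% should experience the event. Checked here by
    sorting patients into equal-sized risk bins and comparing the
    mean predicted risk with the observed event rate in each bin."""
    pairs = sorted(zip(preds, outcomes))
    size = len(pairs) // n_bins
    bins = []
    for i in range(n_bins):
        # Last bin absorbs any remainder from integer division.
        chunk = pairs[i * size:] if i == n_bins - 1 else pairs[i * size:(i + 1) * size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        event_rate = sum(y for _, y in chunk) / len(chunk)
        bins.append((mean_pred, event_rate))
    return bins

# Toy validation data: a model can have good mean calibration
# (average predicted risk equals the event rate overall) while the
# event rates within risk strata still deviate from the predictions.
preds = [0.1, 0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9]
outcomes = [0, 0, 1, 0, 1, 0, 1, 1]
avg_pred, event_rate = mean_calibration(preds, outcomes)
per_bin = moderate_calibration_bins(preds, outcomes, n_bins=2)
```

On this toy set the overall average predicted risk and the event rate coincide at 0.5, while the two risk bins show predicted means of 0.2 and 0.8 against observed rates of 0.25 and 0.75, which is the kind of stratum-level comparison the moderate-calibration definition asks for.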

Cited Authors

  • Van Calster, B; Nieboer, D; Vergouwe, Y; De Cock, B; Pencina, MJ; Steyerberg, EW

Published Date

  • June 2016

Published In

  • Journal of Clinical Epidemiology
Volume / Issue

  • 74

Start / End Page

  • 167 - 176

PubMed ID

  • 26772608

Electronic International Standard Serial Number (EISSN)

  • 1878-5921

Digital Object Identifier (DOI)

  • 10.1016/j.jclinepi.2015.12.005

Language

  • English

Location

  • United States