
Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences.

Publication: Journal Article
Goldstein, BA; Polley, EC; Briggs, FBS; van der Laan, MJ; Hubbard, A
Published in: Int J Biostat
May 1, 2016

Comparing the relative fit of competing models can address many different scientific questions. In classical statistics one can, where appropriate, use likelihood ratio tests and information-based criteria, whereas clinical medicine has tended to rely on comparisons of fit metrics such as C-statistics. For many data adaptive modelling procedures, however, such approaches are not suitable. In these cases statisticians have used cross-validation, which can make inference challenging. In this paper we propose a general approach that focuses on the "conditional" risk difference (conditional on the model fits being fixed) for the improvement in prediction risk. Specifically, we derive a Wald-type test statistic and associated confidence intervals for cross-validated test sets, utilizing the independent validation within cross-validation in conjunction with a test for multiple comparisons. We show that this test maintains proper Type I error under the null fit and can be used as a general test of relative fit for any semi-parametric model alternative. We apply the test to a candidate gene study to test for the association of a set of genes in a genetic pathway.
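The general idea described in the abstract — comparing two prediction algorithms via their cross-validated risk difference and forming a Wald-type test from the held-out losses — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the models, the squared-error loss, and the single pairwise comparison (no multiplicity correction) are simplifying assumptions.

```python
# Hypothetical sketch: Wald-type test of a cross-validated risk difference
# between two prediction algorithms. Illustrative only, not the paper's method.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def cv_risk_difference_test(model_a, model_b, X, y, n_splits=5, seed=0):
    """Test H0: equal conditional risk of two fitted algorithms
    (squared-error loss), using the independent validation folds."""
    loss_diff = np.empty(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model_a.fit(X[train], y[train])
        model_b.fit(X[train], y[train])
        pa = model_a.predict_proba(X[test])[:, 1]
        pb = model_b.predict_proba(X[test])[:, 1]
        # per-observation loss difference on the held-out validation fold
        loss_diff[test] = (y[test] - pa) ** 2 - (y[test] - pb) ** 2
    n = len(y)
    est = loss_diff.mean()                   # cross-validated risk difference
    se = loss_diff.std(ddof=1) / np.sqrt(n)  # Wald-type standard error
    z = est / se
    p = 2 * stats.norm.sf(abs(z))            # two-sided p-value
    ci = (est - 1.96 * se, est + 1.96 * se)  # 95% confidence interval
    return est, z, p, ci

# Toy data; in the paper the application is a candidate gene study.
X, y = make_classification(n_samples=400, random_state=1)
est, z, p, ci = cv_risk_difference_test(
    RandomForestClassifier(random_state=1),
    LogisticRegression(max_iter=1000), X, y)
```

Under the null of equal conditional risk the statistic `z` is approximately standard normal, which is what yields Type I error control for this kind of comparison.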


Published In

Int J Biostat

DOI

10.1515/ijb-2015-0014
EISSN

1557-4679

Publication Date

May 1, 2016

Volume

12

Issue

1

Start / End Page

117 / 129

Location

Germany

Related Subject Headings

  • Statistics & Probability
  • Risk Assessment
  • Polymorphism, Single Nucleotide
  • Models, Statistical
  • Machine Learning
  • Humans
  • Genetic Association Studies
  • Data Interpretation, Statistical
  • 4905 Statistics
  • 0104 Statistics
 

Citation

APA
Goldstein, B. A., Polley, E. C., Briggs, F. B. S., van der Laan, M. J., & Hubbard, A. (2016). Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences. Int J Biostat, 12(1), 117–129. https://doi.org/10.1515/ijb-2015-0014

Chicago
Goldstein, Benjamin A., Eric C. Polley, Farren B. S. Briggs, Mark J. van der Laan, and Alan Hubbard. “Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences.” Int J Biostat 12, no. 1 (May 1, 2016): 117–29. https://doi.org/10.1515/ijb-2015-0014.

ICMJE
Goldstein BA, Polley EC, Briggs FBS, van der Laan MJ, Hubbard A. Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences. Int J Biostat. 2016 May 1;12(1):117–29.

MLA
Goldstein, Benjamin A., et al. “Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences.” Int J Biostat, vol. 12, no. 1, May 2016, pp. 117–29. Pubmed, doi:10.1515/ijb-2015-0014.

NLM
Goldstein BA, Polley EC, Briggs FBS, van der Laan MJ, Hubbard A. Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences. Int J Biostat. 2016 May 1;12(1):117–129.