Evaluating Binary Outcome Classifiers Estimated from Survey Data
Surveys are commonly used to facilitate research in epidemiology, health, and the social and behavioral sciences. Often, these surveys are not simple random samples, and respondents are given weights reflecting their probability of selection into the survey. We show that using survey weights can be beneficial for evaluating the quality of predictive models when splitting data into training and test sets. In particular, we characterize model assessment statistics, such as sensitivity and specificity, as finite population quantities and compute survey-weighted estimates of these quantities with test data comprising a random subset of the original data. Using simulations with data from the National Survey on Drug Use and Health and the National Comorbidity Survey, we show that unweighted metrics estimated with sample test data can misrepresent population performance, but weighted metrics appropriately adjust for the complex sampling design. We also show that this conclusion holds for models trained using upsampling for mitigating class imbalance. The results suggest that weighted metrics should be used when evaluating performance on test data derived from complex surveys.
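The abstract characterizes sensitivity and specificity as finite population quantities estimated with survey weights on the test set. A minimal sketch of that idea is a weighted confusion-matrix ratio, where each test respondent contributes their survey weight rather than a count of one; the function and toy data below are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: survey-weighted sensitivity and specificity on test data.
# Each respondent contributes their survey weight to the confusion-matrix
# cells, so the ratios estimate finite population quantities.

def weighted_sens_spec(y_true, y_pred, weights):
    """Return (sensitivity, specificity) with respondents weighted."""
    tp = sum(w for y, p, w in zip(y_true, y_pred, weights) if y == 1 and p == 1)
    fn = sum(w for y, p, w in zip(y_true, y_pred, weights) if y == 1 and p == 0)
    tn = sum(w for y, p, w in zip(y_true, y_pred, weights) if y == 0 and p == 0)
    fp = sum(w for y, p, w in zip(y_true, y_pred, weights) if y == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # weighted true-positive rate
    specificity = tn / (tn + fp)  # weighted true-negative rate
    return sensitivity, specificity

# Toy test set (hypothetical): unit weights recover the usual
# unweighted sample metrics.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]
unweighted = weighted_sens_spec(y_true, y_pred, [1, 1, 1, 1, 1, 1])

# Survey weights up-weight respondents with low selection probability,
# so the weighted estimates can differ from the sample metrics.
weighted = weighted_sens_spec(y_true, y_pred, [1, 5, 1, 1, 5, 1])
```

With unit weights both metrics equal 2/3 on this toy set; giving weight 5 to the misclassified positive and to one true negative shifts the estimates to 2/7 and 6/7, mirroring the abstract's point that unweighted sample metrics can misrepresent population performance under a complex sampling design.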
Related Subject Headings
- Sensitivity and Specificity
- Models, Statistical
- Humans
- Health Surveys
- Epidemiology
- Computer Simulation
- 4905 Statistics
- 4206 Public health
- 4202 Epidemiology
- 1117 Public Health and Health Services