
Classification performance bias between training and test sets in a limited mammography dataset.

Publication, Journal Article
Hou, R; Lo, JY; Marks, JR; Hwang, ES; Grimm, LJ
Published in: medRxiv
February 23, 2023

OBJECTIVES: To assess the performance bias caused by sampling data into training and test sets in a mammography radiomics study.

METHODS: Mammograms from 700 women were used to study upstaging of ductal carcinoma in situ. The dataset was repeatedly shuffled and split into training (n=400) and test (n=300) cases forty times. For each split, cross-validation was used for training, followed by an assessment on the test set. Logistic regression with regularization and support vector machines were used as the machine learning classifiers. For each split and classifier type, multiple models were created based on radiomics and/or clinical features.

RESULTS: Area under the curve (AUC) performance varied considerably across the different data splits (e.g., radiomics regression model: train 0.58-0.70, test 0.59-0.73). Performance of the regression models showed a tradeoff in which better training performance led to worse test performance and vice versa. Cross-validation over all cases reduced this variability, but required samples of 500+ cases to yield representative estimates of performance.

CONCLUSIONS: In medical imaging, clinical datasets are often relatively small. Models built from different training sets may not be representative of the whole dataset. Depending on the selected data split and model, performance bias could lead to inappropriate conclusions that might influence the clinical significance of the findings. Optimal strategies for test set selection should be developed to ensure study conclusions are appropriate.
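The sampling effect described in the Methods and Results can be illustrated with a minimal sketch: repeatedly shuffle a 700-case dataset into 400 training and 300 test cases and record the AUC on each side of the split. The data below are synthetic, and no radiomics model is fitted (the scores stand in for a fixed classifier output); the spread in AUC across splits arises purely from sampling, which is the bias the study measures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's 700 cases: one score per case,
# weakly associated with a binary upstaging label. (Assumption: the
# real radiomics features and trained models are not reproduced here.)
n = 700
y = rng.integers(0, 2, n)
score = y * 0.5 + rng.normal(size=n)  # weak signal plus noise

def auc(labels, scores):
    """Mann-Whitney AUC: P(score of a positive > score of a negative)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

train_aucs, test_aucs = [], []
for seed in range(40):  # forty random splits, as in the abstract
    idx = np.random.default_rng(seed).permutation(n)
    tr, te = idx[:400], idx[400:]  # 400 train / 300 test
    train_aucs.append(auc(y[tr], score[tr]))
    test_aucs.append(auc(y[te], score[te]))

print(f"train AUC range: {min(train_aucs):.2f}-{max(train_aucs):.2f}")
print(f"test  AUC range: {min(test_aucs):.2f}-{max(test_aucs):.2f}")
```

Even with the underlying "classifier" held fixed, the per-split AUC estimates span a visible range, mirroring the train 0.58-0.70 / test 0.59-0.73 spread reported in the Results.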


Published In

medRxiv

DOI

10.1101/2023.02.15.23285985

Publication Date

February 23, 2023

Location

United States

Citation

APA
Chicago
ICMJE
MLA
NLM
Hou, R., Lo, J. Y., Marks, J. R., Hwang, E. S., & Grimm, L. J. (2023). Classification performance bias between training and test sets in a limited mammography dataset. MedRxiv. https://doi.org/10.1101/2023.02.15.23285985
Hou, Rui, Joseph Y. Lo, Jeffrey R. Marks, E. Shelley Hwang, and Lars J. Grimm. “Classification performance bias between training and test sets in a limited mammography dataset.” MedRxiv, February 23, 2023. https://doi.org/10.1101/2023.02.15.23285985.
Hou R, Lo JY, Marks JR, Hwang ES, Grimm LJ. Classification performance bias between training and test sets in a limited mammography dataset. medRxiv. 2023 Feb 23;
Hou, Rui, et al. “Classification performance bias between training and test sets in a limited mammography dataset.” MedRxiv, Feb. 2023. PubMed, doi:10.1101/2023.02.15.23285985.
Hou R, Lo JY, Marks JR, Hwang ES, Grimm LJ. Classification performance bias between training and test sets in a limited mammography dataset. medRxiv. 2023 Feb 23;