Classification of COVID-19 in chest radiographs: assessing the impact of imaging parameters using clinical and simulated images
As computer-aided diagnostics develop to address new challenges in medical imaging, including emerging diseases such as COVID-19, their initial development is hampered by the limited availability of imaging data. Deep learning algorithms are particularly notorious for requiring large amounts of data, with performance that tends to improve in proportion to the quantity available. Simulated images, as available through advanced virtual trials, may offer an alternative in data-constrained applications. We begin with our previously trained COVID-19 x-ray classification model (denoted CVX), which leveraged additional training on existing pre-pandemic chest radiographs to improve classification performance on a set of COVID-19 chest radiographs. The CVX model achieves demonstrably better performance on clinical images than an equivalent model that applies standard transfer learning from ImageNet weights. The higher-performing CVX model is then shown to generalize effectively to a set of simulated COVID-19 images, both quantitatively, through comparisons of AUCs between the clinical and simulated image sets, and qualitatively, through saliency map patterns that remain consistent across the two sets. We then stratify the classification results on the simulated images to examine dependence on imaging parameters when patient features are held constant. Simulated images show promise for optimizing imaging parameters toward accurate classification in data-constrained applications.
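The abstract contrasts two initialization strategies, standard transfer learning from ImageNet weights versus additional pre-training on pre-pandemic chest radiographs, and evaluates both by AUC. The sketch below illustrates that comparison in generic terms only; the backbone (ResNet-50), checkpoint path, and evaluation loop are illustrative assumptions, not the authors' actual CVX implementation.

```python
# Minimal sketch (not the authors' code) of the two training strategies compared
# in the abstract: initializing a chest-radiograph classifier either from generic
# ImageNet weights or from a checkpoint pre-trained on pre-pandemic chest x-rays,
# then scoring it by ROC AUC. Backbone, checkpoint path, and loaders are assumptions.

import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score


def build_classifier(init: str = "imagenet", checkpoint_path: str | None = None) -> nn.Module:
    """Return a single-logit COVID-19 vs. non-COVID classifier."""
    if init == "imagenet":
        # Standard transfer learning: start from ImageNet weights.
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    else:
        # Domain-adapted start: load weights pre-trained on pre-pandemic radiographs
        # (hypothetical checkpoint; the actual CVX initialization is not given here).
        model = models.resnet50(weights=None)
        model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.fc = nn.Linear(model.fc.in_features, 1)  # binary classification head
    return model


@torch.no_grad()
def evaluate_auc(model: nn.Module, loader) -> float:
    """Compute ROC AUC over a loader yielding (image_batch, label_batch) pairs."""
    model.eval()
    scores, labels = [], []
    for x, y in loader:
        logits = model(x).squeeze(1)
        scores.extend(torch.sigmoid(logits).tolist())
        labels.extend(y.tolist())
    return roc_auc_score(labels, scores)
```

In this framing, the same `evaluate_auc` routine could be applied to both clinical and simulated test loaders to produce the kind of quantitative cross-set comparison the abstract describes.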