Unsupervised learning of robust models for cardiac and photon-counting x-ray CT denoising
Supervised deep learning methods have rapidly advanced to the forefront of medical imaging research; however, they face limitations in advanced CT applications. For instance, label generation is difficult for retrospective cardiac CT data, where dose modulation yields variable image quality across cardiac phases. Similarly, a model trained to denoise multi-energy data may not generalize across differing contrasts and energy channel counts. In this work, we propose and demonstrate several innovations for unsupervised denoising of such multi-channel CT data sets, with a focus on improving model robustness to varying noise levels, image contrast, and channel counts. Specifically, we update our past Constrained Bregman Framework for unsupervised denoising to simplify model training procedures, to automate regularization hyperparameter selection, and to spatially adapt denoising performance. Furthermore, we propose a new data normalization strategy involving weighted singular value decomposition, convolutional mean subtraction, and noise variance scaling, which abstracts the image denoising problem from the contrast and channel count of the input data. We combine these improvements with a hybrid Inception Net / half U-Net network structure, ideally suited for 3D data processing, and a plug-and-play scheme for incorporating traditional regularizers into deep learning cost functions. We demonstrate the value of these improvements for network training and denoising using retrospectively gated cardiac photon-counting CT data acquired with a Siemens NAEOTOM Alpha scanner (2-3x noise reduction). This clinical model generalizes robustly to 40-channel preclinical cardiac photon-counting CT data acquired in a mouse without the need for additional training (6-10x noise reduction, negligible intensity bias).
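The abstract does not give implementation details for the normalization strategy, so the following NumPy sketch is only a rough illustration of how the three named steps could compose. The function name normalize_multichannel_ct, the per-channel weights, the moving-average kernel size, and the MAD-based noise estimate are assumptions introduced for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_multichannel_ct(x, channel_weights, kernel=7):
    """Illustrative sketch (not the paper's code) of the normalization idea.

    x               : (C, Z, Y, X) multi-channel CT volume
    channel_weights : (C,) per-channel weights (assumed, e.g. inverse noise)
    """
    C = x.shape[0]
    flat = (x * channel_weights[:, None, None, None]).reshape(C, -1)

    # Weighted SVD: rotate the weighted channels into an ordered,
    # decorrelated basis so the denoiser sees a representation that is
    # independent of the original contrast and energy channel count.
    U, _, _ = np.linalg.svd(flat, full_matrices=False)
    y = (U.T @ flat).reshape(x.shape)

    # Convolutional mean subtraction: remove local low-frequency signal
    # with a moving-average filter, leaving a noise-dominated residual.
    local_mean = np.stack([uniform_filter(c, size=kernel) for c in y])
    residual = y - local_mean

    # Noise variance scaling: scale each decorrelated channel to roughly
    # unit noise variance using a robust MAD estimate (an assumption here).
    sigma = np.median(np.abs(residual.reshape(C, -1)), axis=1) / 0.6745
    residual /= sigma[:, None, None, None]
    return residual, local_mean, sigma, U
```

Under these assumptions, a network trained on the normalized residual is presented with approximately unit-variance, decorrelated inputs regardless of the channel count C, which is consistent with the claimed generalization from clinical data to 40-channel preclinical data; the stored local mean, noise scales, and SVD basis allow the denoised result to be mapped back to the original channel space.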