Volumetric, dual-domain x-ray CT reconstruction with deep learning
Novel deep learning (DL) methods have produced state-of-the-art results in nearly every area of x-ray CT data processing. However, DL-driven iterative reconstruction remains challenging because of the volumetric nature of many reconstruction problems: the system matrix relating the projection and image domains is too large to incorporate into network training. Past approaches for 2D reconstruction include consecutive projection- and image-domain processing in a single pass, employing a known analytical operator between domains, and unrolling an established iterative method to solve a series of sub-problems with separately trained networks. Here, we synergize these approaches within the split Bregman optimization framework. Specifically, we formulate a cost function and a supervised training approach that yield an analytical reconstruction sub-step, which transforms between domains, and a regularization sub-step, which is consistent between iterations. Combined with projection- and image-domain splitting, these properties reduce the number of free parameters that must be learned, making volumetric, dual-domain data processing more practical. Using this framework, we simultaneously train 3D image- and projection-domain regularizers with supervised learning during iterative reconstruction, with promising algorithm convergence results. Our trained reconstruction framework outperforms a more traditional iterative reconstruction method when starting from 90 noisy projections of the MOBY mouse phantom (image SSIM: iterative, 0.65; DL, 0.86). Furthermore, we successfully apply the model to similarly sampled in vivo mouse data acquired with micro-CT, reducing noise from 277 HU in an initial reconstruction to 41 HU in the DL reconstruction, compared with 63 HU of noise in a fully sampled reference reconstruction.
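The alternating structure described above (an analytical reconstruction sub-step paired with a regularization sub-step that is consistent across iterations) can be illustrated with a minimal split Bregman loop. This is a hedged sketch, not the paper's implementation: the forward operator `A` is a small dense matrix standing in for the CT system matrix, and the `soft_threshold` proximal operator stands in for the trained 3D regularizer networks; all names and parameter values are illustrative.

```python
# Illustrative split Bregman iteration: an analytical least-squares
# reconstruction sub-step alternating with a proximal regularization
# sub-step. A learned network could replace soft_threshold.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 -- a stand-in for a learned regularizer."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def split_bregman(A, b, mu=1.0, lam=0.1, n_iter=50):
    """Minimize ||Ax - b||^2 + lam*||x||_1 via variable splitting x = d."""
    m, n = A.shape
    x = np.zeros(n)
    d = np.zeros(n)  # split (regularized) variable
    v = np.zeros(n)  # Bregman (dual) variable
    # Precompute the normal-equations matrix for the analytical sub-step.
    H = A.T @ A + mu * np.eye(n)
    for _ in range(n_iter):
        # Reconstruction sub-step: argmin_x ||Ax - b||^2 + mu*||x - (d - v)||^2
        x = np.linalg.solve(H, A.T @ b + mu * (d - v))
        # Regularization sub-step: same proximal operator every iteration.
        d = soft_threshold(x + v, lam / mu)
        # Bregman update enforcing x = d at convergence.
        v = v + x - d
    return x

# Toy problem: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = split_bregman(A, b)
```

Because the regularization sub-step applies the same operator at every iteration, a single trained network can serve that role throughout the reconstruction, which is one of the properties the abstract credits with reducing the number of free parameters.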