
Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography.

Publication, Journal Article
Clark, DP; Schwartz, FR; Marin, D; Ramirez-Giraldo, JC; Badea, CT
Published in: Med Phys
September 2020

PURPOSE: Data completion is commonly employed in dual-source, dual-energy computed tomography (CT) when physical or hardware constraints limit the field of view (FoV) covered by one of the two imaging chains. Practically, dual-energy data completion is accomplished by estimating missing projection data based on the imaging chain with the full FoV and then by appropriately truncating the analytical reconstruction of the data with the smaller FoV. While this approach works well in many clinical applications, some applications would benefit from spectral contrast estimates over the larger FoV (spectral extrapolation), e.g., model-based iterative reconstruction, contrast-enhanced abdominal imaging of large patients, interior tomography, and combined temporal and spectral imaging.

METHODS: To document the fidelity of spectral extrapolation and to prototype a deep learning algorithm to perform it, we assembled a data set of 50 dual-source, dual-energy abdominal x-ray CT scans (acquired at Duke University Medical Center with 5 Siemens Flash scanners; chain A: 50 cm FoV, 100 kV; chain B: 33 cm FoV, 140 kV + Sn; helical pitch: 0.8). Data sets were reconstructed using ReconCT (v14.1, Siemens Healthineers): 768 × 768 pixels per slice, 50 cm FoV, 0.75 mm slice thickness, "Dual-Energy - WFBP" reconstruction mode with dual-source data completion. A hybrid architecture consisting of a learned piecewise linear transfer function (PLTF) and a convolutional neural network (CNN) was trained using 40 scans (five scans reserved for validation, five for testing). The PLTF learned to map chain A spectral contrast to chain B spectral contrast voxel-wise, performing an image-domain analog of dual-source data completion with approximate spectral reweighting.
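The voxel-wise PLTF step can be sketched as a one-dimensional piecewise linear mapping from chain A CT numbers to estimated chain B CT numbers. The sketch below is illustrative only: the knot positions and values are invented for demonstration (in the actual method the mapping is learned from training data), and the function name `apply_pltf` is hypothetical.

```python
import numpy as np

def apply_pltf(hu_a, knots_in, knots_out):
    """Map chain A HU values through a piecewise linear function
    defined by (knots_in, knots_out) control points."""
    return np.interp(hu_a, knots_in, knots_out)

# Illustrative knots (not from the paper): iodine-enhanced tissue attenuates
# less at 140 kV + Sn than at 100 kV, so enhanced values are compressed.
knots_in = np.array([-1000.0, 0.0, 100.0, 400.0, 1500.0])
knots_out = np.array([-1000.0, 0.0, 80.0, 250.0, 1200.0])

# Tiny stand-in for a reconstructed chain A slice, in HU.
slice_a = np.array([[-1000.0, 50.0],
                    [200.0, 400.0]])
slice_b_est = apply_pltf(slice_a, knots_in, knots_out)
```

Because the PLTF acts on each voxel independently of its neighbors, it can only reweight contrast, not recover structure, which is why the CNN stage described next is needed.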
The CNN, with its U-net structure, then learned to improve the accuracy of the chain B contrast estimates by copying chain A structural information, by encoding prior chain A-chain B contrast relationships, and by generalizing feature-contrast associations. Training was supervised, using data from within the 33-cm chain B FoV to optimize and assess network performance.

RESULTS: Extrapolation performance on the testing data confirmed our network's robustness and ability to generalize to unseen data from different patients, yielding maximum extrapolation errors of 26 HU following the PLTF and 7.5 HU following the CNN (averaged per target organ). Degradation of network performance when applied to a geometrically simple phantom confirmed our method's reliance on feature-contrast relationships in correctly inferring spectral contrast. Integrating our image-domain spectral extrapolation network into a standard dual-source, dual-energy processing pipeline for Siemens Flash scanner data yielded spectral CT data with adequate fidelity for the generation of both 50 keV monochromatic images and material decomposition images over a 30-cm chain B FoV when only 20 cm of chain B data were available for spectral extrapolation.

CONCLUSIONS: Even with a moderate amount of training data, deep learning methods are capable of robustly inferring spectral contrast from feature-contrast relationships in spectral CT data, leading to spectral extrapolation performance well beyond what may be expected at face value. Future work reconciling spectral extrapolation results with the original projection data is expected to further improve results in outlying and pathological cases.
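The per-organ error figures quoted above (26 HU after the PLTF, 7.5 HU after the CNN) suggest an evaluation of the form "mean absolute HU error within each organ label, then the maximum over organs." A minimal sketch of that summary, with invented labels and values (the function name, organ codes, and numbers are all illustrative, not from the paper):

```python
import numpy as np

def per_organ_mean_error(estimate, truth, labels):
    """Mean absolute HU error within each labeled organ region
    (label 0 reserved for background)."""
    return {int(lab): float(np.abs(estimate[labels == lab] -
                                   truth[labels == lab]).mean())
            for lab in np.unique(labels) if lab != 0}

# Toy ground-truth chain B values, network estimates, and organ labels.
truth = np.array([100.0, 100.0, 50.0, 50.0])
est = np.array([105.0, 95.0, 52.0, 48.0])
labels = np.array([1, 1, 2, 2])  # e.g., 1 = liver, 2 = kidney (illustrative)

errors = per_organ_mean_error(est, truth, labels)
worst = max(errors.values())  # figure of merit: maximum per-organ error
```

Reporting the maximum over organs, rather than a global mean, is a conservative choice: a single poorly extrapolated organ cannot be hidden by good performance elsewhere.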


Published In

Med Phys

DOI

10.1002/mp.14324

EISSN

2473-4209

Publication Date

September 2020

Volume

47

Issue

9

Start / End Page

4150 / 4163

Location

United States

Related Subject Headings

  • X-Rays
  • Tomography, X-Ray Computed
  • Phantoms, Imaging
  • Nuclear Medicine & Medical Imaging
  • Humans
  • Deep Learning
  • Algorithms
  • 5105 Medical and biological physics
  • 4003 Biomedical engineering
  • 1112 Oncology and Carcinogenesis
 

Citation

APA: Clark, D. P., Schwartz, F. R., Marin, D., Ramirez-Giraldo, J. C., & Badea, C. T. (2020). Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography. Med Phys, 47(9), 4150–4163. https://doi.org/10.1002/mp.14324
Chicago: Clark, Darin P., Fides R. Schwartz, Daniele Marin, Juan C. Ramirez-Giraldo, and Cristian T. Badea. “Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography.” Med Phys 47, no. 9 (September 2020): 4150–63. https://doi.org/10.1002/mp.14324.
ICMJE: Clark DP, Schwartz FR, Marin D, Ramirez-Giraldo JC, Badea CT. Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography. Med Phys. 2020 Sep;47(9):4150–63.
MLA: Clark, Darin P., et al. “Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography.” Med Phys, vol. 47, no. 9, Sept. 2020, pp. 4150–63. Pubmed, doi:10.1002/mp.14324.
NLM: Clark DP, Schwartz FR, Marin D, Ramirez-Giraldo JC, Badea CT. Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography. Med Phys. 2020 Sep;47(9):4150–4163.
