
Deep learning of 3D computed tomography (CT) images for organ segmentation using 2D multi-channel SegNet model

Publication, Conference
Liu, Y; Fu, W; Selvakumaran, V; Phelan, M; Segars, WP; Samei, E; Mazurowski, M; Lo, JY; Rubin, GD; Henao, R
Published in: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
January 1, 2019

Purpose: To accurately segment organs from 3D CT image volumes using a 2D, multi-channel SegNet model built on a deep convolutional neural network (CNN) encoder-decoder architecture.

Method: We trained a SegNet model on the extended cardiac-torso (XCAT) dataset, previously constructed from patient chest-abdomen-pelvis (CAP) computed tomography (CT) studies of 50 Duke patients. Each study consists of one low-resolution (5-mm section thickness) 3D CT image volume and its corresponding manually labeled 3D volume. To improve modeling in this small-sample regime, we applied median frequency class balancing weights in the SegNet loss function, data normalization to adjust for the intensity coverage of CT volumes, data transformation to harmonize voxel resolution, CT section extrapolation to virtually increase the number of transverse sections available as inputs to the 2D multi-channel model, and data augmentation to simulate mildly rotated volumes. To assess model performance, we calculated Dice coefficients on a held-out test set and qualitatively evaluated segmentations on high-resolution CTs. Further, we incorporated 50 patients' high-resolution CTs with manually labeled kidney segmentation masks to quantitatively evaluate the XCAT-trained segmentation model. The entire study was conducted on raw, identifiable data within the Duke Protected Analytics Computing Environment (PACE).

Result: We achieved median Dice coefficients above 0.8 for most organs and structures on XCAT test instances and observed good performance on additional images without manual segmentation labels, as qualitatively evaluated by Duke Radiology experts. Moreover, we achieved a median Dice coefficient of 0.89 for kidneys on high-resolution CTs.

Conclusion: 2D, multi-channel models like SegNet are effective for organ segmentation of 3D CT image volumes, achieving high segmentation accuracy.
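
The abstract names several preprocessing steps without implementation detail. As a minimal sketch, the Python below illustrates two of them: intensity normalization of CT volumes and assembly of adjacent transverse sections into a multi-channel 2D input, with edge-section replication standing in for the section extrapolation described above. The HU window, the three-section channel stack, and the function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normalize_hu(volume, hu_min=-1000.0, hu_max=1000.0):
    """Clip CT intensities to a fixed HU window and rescale to [0, 1].

    The window endpoints are assumptions; the paper only states that
    normalization adjusts for the intensity coverage of CT volumes.
    """
    volume = np.clip(volume.astype(np.float32), hu_min, hu_max)
    return (volume - hu_min) / (hu_max - hu_min)

def stack_adjacent_sections(volume, index, context=1):
    """Build one multi-channel 2D input from 2*context + 1 adjacent
    transverse sections of a (depth, H, W) volume, replicating edge
    sections at the volume boundary so every section gets a full stack."""
    depth = volume.shape[0]
    idxs = np.clip(np.arange(index - context, index + context + 1), 0, depth - 1)
    return volume[idxs]  # shape: (2*context + 1, H, W)
```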
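
Median frequency class balancing reweights the loss so that small organs are not swamped by background voxels. A sketch of the standard formulation (Eigen and Fergus) follows; the function name and the integer-label input format are our assumptions.

```python
import numpy as np

def median_frequency_weights(label_volumes, num_classes):
    """weight(c) = median_freq / freq(c), where freq(c) is the voxel count
    of class c divided by the total voxels of the volumes containing c.
    Rare organs get weights above 1; dominant classes (background) below 1."""
    class_voxels = np.zeros(num_classes)
    image_voxels = np.zeros(num_classes)
    for vol in label_volumes:
        counts = np.bincount(vol.ravel(), minlength=num_classes)
        class_voxels += counts
        image_voxels[counts > 0] += vol.size
    freq = class_voxels / np.maximum(image_voxels, 1)
    freq[freq == 0] = np.nan  # classes absent from the training set
    return np.nan_to_num(np.nanmedian(freq) / freq, nan=0.0)
```

The resulting vector can be supplied as the per-class weight of a cross-entropy loss, e.g. torch.nn.CrossEntropyLoss(weight=...) in PyTorch.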
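
SegNet's defining mechanism is that the decoder upsamples with the max-pooling indices saved by the encoder instead of learned deconvolutions. The PyTorch sketch below shows a deliberately shallow two-stage version of that encoder-decoder to make the mechanism concrete; the real SegNet uses five VGG-16-style stages, and the channel widths, input channels, and class count here are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    """Two-stage SegNet-style encoder-decoder (illustrative sketch)."""

    def __init__(self, in_channels=3, num_classes=9, width=64):
        super().__init__()
        self.enc1 = self._block(in_channels, width)
        self.enc2 = self._block(width, width * 2)
        self.dec2 = self._block(width * 2, width)
        self.dec1 = self._block(width, width)
        self.classifier = nn.Conv2d(width, num_classes, kernel_size=1)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)

    @staticmethod
    def _block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=3, padding=1),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.enc1(x)
        x, i1 = self.pool(x)       # remember where the maxima were
        x = self.enc2(x)
        x, i2 = self.pool(x)
        x = self.unpool(x, i2)     # reuse encoder pooling indices
        x = self.dec2(x)
        x = self.unpool(x, i1)
        x = self.dec1(x)
        return self.classifier(x)  # per-pixel class logits

# Example: a 3-channel 256x256 section stack in, per-class logits out.
logits = MiniSegNet()(torch.randn(1, 3, 256, 256))  # (1, 9, 256, 256)
```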
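
Performance is summarized by per-organ Dice coefficients. For completeness, a minimal implementation over integer label volumes (the function name is ours):

```python
import numpy as np

def dice_coefficient(pred, target, label):
    """Dice = 2 * |A & B| / (|A| + |B|) for one organ label."""
    a = pred == label
    b = target == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else float("nan")
```

Taking the median of this value over held-out volumes, per organ, yields summaries of the kind quoted above (e.g. 0.89 for kidneys on high-resolution CTs).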


Published In

Progress in Biomedical Optics and Imaging - Proceedings of SPIE

DOI

10.1117/12.2512887

ISSN

1605-7422

ISBN

9781510625556

Publication Date

January 1, 2019

Volume

10954

Citation

APA: Liu, Y., Fu, W., Selvakumaran, V., Phelan, M., Segars, W. P., Samei, E., … Henao, R. (2019). Deep learning of 3D computed tomography (CT) images for organ segmentation using 2D multi-channel SegNet model. In Progress in Biomedical Optics and Imaging - Proceedings of SPIE (Vol. 10954). https://doi.org/10.1117/12.2512887

Chicago: Liu, Y., W. Fu, V. Selvakumaran, M. Phelan, W. P. Segars, E. Samei, M. Mazurowski, J. Y. Lo, G. D. Rubin, and R. Henao. “Deep learning of 3D computed tomography (CT) images for organ segmentation using 2D multi-channel SegNet model.” In Progress in Biomedical Optics and Imaging - Proceedings of SPIE, Vol. 10954, 2019. https://doi.org/10.1117/12.2512887.

ICMJE: Liu Y, Fu W, Selvakumaran V, Phelan M, Segars WP, Samei E, et al. Deep learning of 3D computed tomography (CT) images for organ segmentation using 2D multi-channel SegNet model. In: Progress in Biomedical Optics and Imaging - Proceedings of SPIE. 2019.

MLA: Liu, Y., et al. “Deep learning of 3D computed tomography (CT) images for organ segmentation using 2D multi-channel SegNet model.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 10954, 2019. Scopus, doi:10.1117/12.2512887.

NLM: Liu Y, Fu W, Selvakumaran V, Phelan M, Segars WP, Samei E, Mazurowski M, Lo JY, Rubin GD, Henao R. Deep learning of 3D computed tomography (CT) images for organ segmentation using 2D multi-channel SegNet model. Progress in Biomedical Optics and Imaging - Proceedings of SPIE. 2019.
