
An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors

Publication, Journal Article
Zhou, L; Meng, X; Huang, Y; Kang, K; Zhou, J; Chu, Y; Li, H; Xie, D; Zhang, J; Yang, W; Bai, N; Zhao, Y; Zhao, M; Wang, G; Carin, L; Yu, K ...
Published in: Nature Machine Intelligence
May 1, 2022

Tremendous efforts have been made to improve the diagnosis and treatment of COVID-19, but knowledge of long-term complications remains limited. In particular, a large proportion of survivors have respiratory complications, yet experienced radiologists and state-of-the-art artificial intelligence systems are currently unable to detect many abnormalities in follow-up computed tomography (CT) scans of COVID-19 survivors. Here we propose Deep-LungParenchyma-Enhancing (DLPE), a computer-aided detection (CAD) method for detecting and quantifying pulmonary parenchyma lesions on chest CT. By developing several deep-learning-based segmentation models and assembling them in an interpretable manner, DLPE removes tissues irrelevant to the pulmonary parenchyma and calculates a scan-level optimal window, which considerably enhances parenchyma lesions relative to the standard lung window. Aided by DLPE, radiologists discovered novel and interpretable lesions in COVID-19 inpatients and survivors that were previously invisible under the lung window. Building on DLPE, we removed the scan-level bias of CT scans and then extracted precise radiomics from these novel lesions. We further demonstrated that these radiomics have strong predictive power for key COVID-19 clinical metrics on an inpatient cohort of 1,193 CT scans and for sequelae on a survivor cohort of 219 CT scans. Our work sheds light on the development of interpretable medical artificial intelligence and showcases how artificial intelligence can discover medical findings that are beyond sight.
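The abstract's central idea is that after segmenting away non-parenchymal tissue, the remaining CT intensities are re-windowed so subtle parenchyma lesions become visible. As a rough illustration of what CT intensity windowing means, the Python sketch below maps Hounsfield units to a display range for a given window level and width; the function name and the specific window values are illustrative assumptions, not the scan-level optimal window that DLPE computes.

```python
# Minimal sketch of CT intensity windowing (illustrative only; not the DLPE method).
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map Hounsfield units to [0, 1] for display using a window level and width."""
    low, high = level - width / 2.0, level + width / 2.0
    return np.clip((hu - low) / (high - low), 0.0, 1.0)

# Synthetic stand-in for a CT slice in Hounsfield units.
scan_hu = np.random.randint(-1000, 400, size=(512, 512)).astype(np.float32)

# Conventional lung window (level -600 HU, width 1500 HU) versus a narrower,
# purely illustrative window that stretches contrast over a smaller HU range,
# which is the general mechanism by which a tighter window can reveal subtle
# parenchyma differences.
lung_view = apply_window(scan_hu, level=-600.0, width=1500.0)
enhanced_view = apply_window(scan_hu, level=-700.0, width=400.0)  # assumed values
```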


Published In

Nature Machine Intelligence

DOI

10.1038/s42256-022-00483-7
EISSN

2522-5839

Publication Date

May 1, 2022

Volume

4

Issue

5

Start / End Page

494 / 503

Related Subject Headings

  • 46 Information and computing sciences
  • 40 Engineering
 

Citation

APA
Zhou, L., Meng, X., Huang, Y., Kang, K., Zhou, J., Chu, Y., … Gao, X. (2022). An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors. Nature Machine Intelligence, 4(5), 494–503. https://doi.org/10.1038/s42256-022-00483-7

Chicago
Zhou, L., X. Meng, Y. Huang, K. Kang, J. Zhou, Y. Chu, H. Li, et al. “An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors.” Nature Machine Intelligence 4, no. 5 (May 1, 2022): 494–503. https://doi.org/10.1038/s42256-022-00483-7.

ICMJE
Zhou L, Meng X, Huang Y, Kang K, Zhou J, Chu Y, et al. An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors. Nature Machine Intelligence. 2022 May 1;4(5):494–503.

MLA
Zhou, L., et al. “An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors.” Nature Machine Intelligence, vol. 4, no. 5, May 2022, pp. 494–503. Scopus, doi:10.1038/s42256-022-00483-7.

NLM
Zhou L, Meng X, Huang Y, Kang K, Zhou J, Chu Y, Li H, Xie D, Zhang J, Yang W, Bai N, Zhao Y, Zhao M, Wang G, Carin L, Xiao X, Yu K, Qiu Z, Gao X. An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors. Nature Machine Intelligence. 2022 May 1;4(5):494–503.
