
A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label Efficient Heart Sound Segmentation

Publication, Conference
Huang, Q; Yang, H; Zeng, E; Chen, Y
Published in: Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024
January 1, 2024

The COVID-19 pandemic has intensified the need for home-based cardiac health monitoring systems. Despite advancements in electrocardiogram (ECG) and phonocardiogram (PCG) wearable sensors, accurate heart sound segmentation algorithms remain understudied. Existing deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), struggle to segment noisy signals when given only PCG data. We propose a two-step heart sound segmentation algorithm that analyzes synchronized ECG and PCG signals. The first step detects heartbeats with a CNN-LSTM model applied to the ECG, and the second step performs beat-wise heart sound segmentation with a 1D U-Net that takes multi-modal inputs. Our method leverages the temporal correlation between ECG and PCG signals to improve segmentation performance. To address the scarcity of labeled data in AI-supported biomedical studies, we introduce a segment-wise contrastive learning technique for signal segmentation, overcoming the limitations of traditional contrastive learning methods designed for classification tasks. We evaluated the two-step algorithm on the PhysioNet 2016 dataset and a private dataset from Bayland Scientific, achieving an F1 score of 96.43 on the former. Notably, the segment-wise contrastive learning technique remained effective with limited labeled data: when trained on just 1% of the labeled PhysioNet data, the model pre-trained on the full unlabeled dataset lost only 2.88 F1 points, outperforming SimCLR. Overall, the proposed algorithm and learning technique show promise for improving heart sound segmentation while reducing the need for labeled data.
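
To make the two-step design concrete, the PyTorch sketch below shows one plausible shape of such a pipeline: a CNN-LSTM that detects heartbeats from the ECG channel, followed by a small 1D U-Net that segments stacked ECG+PCG windows. This is not the authors' code; all layer sizes, names, and the four-state output (S1, systole, S2, diastole is a common convention for heart sound segmentation) are illustrative assumptions.

import torch
import torch.nn as nn

class HeartbeatDetector(nn.Module):
    """Step 1 (sketch): CNN front-end + bidirectional LSTM over ECG,
    emitting a per-sample heartbeat probability."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, ecg):               # ecg: (batch, 1, time)
        feats = self.cnn(ecg)             # (batch, 32, time)
        out, _ = self.lstm(feats.transpose(1, 2))
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, time)

class UNet1D(nn.Module):
    """Step 2 (sketch): tiny one-level 1D U-Net over stacked ECG+PCG
    channels, producing per-sample logits over four heart-sound states."""
    def __init__(self, in_ch=2, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool1d(2),
                                  nn.Conv1d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose1d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv1d(64, 32, 3, padding=1), nn.ReLU())
        self.out = nn.Conv1d(32, n_classes, 1)

    def forward(self, x):                 # x: (batch, 2, time), time even
        e = self.enc(x)                   # encoder features, kept for skip
        d = self.down(e)                  # downsample + deepen
        u = self.up(d)                    # upsample back to input length
        return self.out(self.dec(torch.cat([u, e], dim=1)))  # skip connection

# Usage with random stand-ins for synchronized ECG/PCG windows:
ecg = torch.randn(1, 1, 2048)
pcg = torch.randn(1, 1, 2048)
beat_prob = HeartbeatDetector()(ecg)                  # step 1: beat detection
states = UNet1D()(torch.cat([ecg, pcg], dim=1))       # step 2: segmentation
print(beat_prob.shape, states.shape)                  # (1, 2048), (1, 4, 2048)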
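
The abstract does not spell out the segment-wise contrastive objective, so the following is only a hedged guess at its general form: an NT-Xent-style loss applied to aligned per-segment embeddings from two augmented views, rather than to whole-recording embeddings as in SimCLR. The function name, alignment scheme, and temperature value are assumptions, not the paper's formulation.

import torch
import torch.nn.functional as F

def segment_contrastive_loss(emb_a, emb_b, temperature=0.1):
    """NT-Xent over aligned segment embeddings from two augmented views.

    emb_a, emb_b: (n_segments, dim) embeddings of the same segments under
    two augmentations; segment i in view A is positive with segment i in
    view B and negative with every other segment (an assumed pairing).
    """
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / temperature       # (n, n) cosine-similarity matrix
    targets = torch.arange(a.size(0))      # diagonal entries are positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-ins for per-segment encoder outputs:
loss = segment_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())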


Published In

Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024

DOI

10.1109/CHASE60773.2024.00020
Publication Date

January 1, 2024

Start / End Page

109 / 119

Citation

APA:
Huang, Q., Yang, H., Zeng, E., & Chen, Y. (2024). A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label Efficient Heart Sound Segmentation. In Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024 (pp. 109–119). https://doi.org/10.1109/CHASE60773.2024.00020

Chicago:
Huang, Q., H. Yang, E. Zeng, and Y. Chen. “A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label Efficient Heart Sound Segmentation.” In Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024, 109–19, 2024. https://doi.org/10.1109/CHASE60773.2024.00020.

ICMJE:
Huang Q, Yang H, Zeng E, Chen Y. A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label Efficient Heart Sound Segmentation. In: Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024. 2024. p. 109–19.

MLA:
Huang, Q., et al. “A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label Efficient Heart Sound Segmentation.” Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024, 2024, pp. 109–19. Scopus, doi:10.1109/CHASE60773.2024.00020.

NLM:
Huang Q, Yang H, Zeng E, Chen Y. A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label Efficient Heart Sound Segmentation. Proceedings - 2024 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2024. 2024. p. 109–119.
