A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Label-Efficient Heart Sound Segmentation
The COVID-19 pandemic has intensified the need for home-based cardiac health monitoring systems. Despite advancements in electrocardiogram (ECG) and phonocardiogram (PCG) wearable sensors, accurate heart sound segmentation algorithms remain understudied. Existing deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), struggle to segment noisy signals using only PCG data. We propose a two-step heart sound segmentation algorithm that analyzes synchronized ECG and PCG signals. The first step performs heartbeat detection with a CNN-LSTM-based model on ECG data, and the second step performs beat-wise heart sound segmentation with a 1D U-Net that incorporates multi-modal inputs. Our method leverages the temporal correlation between ECG and PCG signals to enhance segmentation performance. To address the heavy reliance on labeled data in AI-supported biomedical studies, we introduce a segment-wise contrastive learning technique for signal segmentation, overcoming the limitations of traditional contrastive learning methods designed for classification tasks. We evaluated the two-step algorithm on the PhysioNet 2016 dataset and a private dataset from Bayland Scientific, achieving an F1 score of 96.43 on the former. Notably, the segment-wise contrastive learning technique remained effective with limited labeled data: when trained on just 1% of the labeled PhysioNet data, the model pre-trained on the full unlabeled dataset dropped only 2.88 points in F1 score, outperforming the SimCLR method. Overall, the proposed algorithm and learning technique show promise for improving heart sound segmentation while reducing the need for labeled data.
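To make the two-step pipeline described above concrete, the sketch below outlines one plausible realization in PyTorch: a CNN-LSTM heartbeat detector operating on the ECG channel, followed by a shallow 1D U-Net that segments a beat-aligned window of the synchronized ECG and PCG pair into the four heart-sound states. All module names, channel widths, window lengths, and the assumed 200 Hz sampling rate are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch of the two-step architecture; hyperparameters are assumptions.
import torch
import torch.nn as nn


class HeartbeatDetector(nn.Module):
    """Step 1 (assumed form): CNN front end + bidirectional LSTM over an ECG strip,
    emitting a per-sample beat probability."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, ecg):                      # ecg: (B, 1, T)
        feats = self.cnn(ecg).transpose(1, 2)    # (B, T, 32)
        feats, _ = self.lstm(feats)              # (B, T, 2*hidden)
        return torch.sigmoid(self.head(feats)).squeeze(-1)  # (B, T) beat probability


class UNet1D(nn.Module):
    """Step 2 (assumed form): 1D U-Net over a beat-aligned window of the synchronized
    ECG+PCG pair (2 input channels), predicting one of four heart-sound states
    (S1, systole, S2, diastole) per sample."""
    def __init__(self, in_ch=2, n_classes=4, base=32):
        super().__init__()
        self.enc1 = self._block(in_ch, base)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = self._block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, 2, stride=2)
        self.dec2 = self._block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, 2, stride=2)
        self.dec1 = self._block(base * 2, base)
        self.out = nn.Conv1d(base, n_classes, kernel_size=1)

    @staticmethod
    def _block(c_in, c_out):
        return nn.Sequential(
            nn.Conv1d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv1d(c_out, c_out, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                        # x: (B, 2, L), L divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                      # (B, n_classes, L) per-sample logits


# Usage sketch: detect beats on ECG, then segment one beat-aligned multi-modal window.
detector, segmenter = HeartbeatDetector(), UNet1D()
ecg = torch.randn(1, 1, 2000)                    # 10 s of ECG at an assumed 200 Hz
pcg = torch.randn(1, 1, 2000)                    # synchronized PCG
beat_prob = detector(ecg)                        # (1, 2000) per-sample beat probability
window = torch.cat([ecg[:, :, :800], pcg[:, :, :800]], dim=1)  # one beat-aligned window
states = segmenter(window).argmax(dim=1)         # (1, 800) predicted heart-sound states
```

In this reading, the detector supplies beat boundaries that define the windows fed to the U-Net, which is how the temporal correlation between the ECG and PCG channels is exploited; the segment-wise contrastive pre-training mentioned in the abstract would be applied to the segmentation network before fine-tuning on the limited labeled data.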