Background and Objective: The electrocardiogram (ECG) is one of the most important diagnostic tools for cardiovascular diseases (CVDs). Recent studies show that deep learning models can be trained on labeled ECGs to detect CVDs automatically, assisting cardiologists in diagnosis. However, these models rely heavily on labels during training, and manual labeling is costly and time-consuming. This paper proposes a new self-supervised learning (SSL) method for multilead ECGs, bootstrap each lead's latent (BELL), to reduce this reliance and boost model performance in various tasks, especially when training data are insufficient.

Method: BELL is a variant of the well-known bootstrap your own latent (BYOL) method. It aims to learn prior knowledge from unlabeled ECGs through pretraining, thereby benefiting downstream tasks, and it leverages the characteristics of multilead ECGs. First, BELL uses a multi-branch skeleton, which is more effective for processing multilead ECGs. Second, it introduces intra-lead and inter-lead mean square error (MSE) losses to guide pretraining, and fusing the two yields better performance (see the sketch after the abstract). Additionally, BELL inherits the main advantage of BYOL: no negative pairs are used in pretraining, making it more efficient.

Results: BELL surpasses previous works in most experiments. More importantly, pretraining improves model performance by 0.69%-8.89% in downstream tasks when only 10% of the training data are available. Furthermore, BELL adapts well to uncurated ECG data from a real-world hospital, with only slight performance degradation (<1% in most cases).

Conclusion: The results suggest that BELL can alleviate the reliance on manual ECG labeling by cardiologists, a critical bottleneck of current deep learning models. In this way, BELL can help extend the application of deep learning to automatic ECG analysis, reducing cardiologists' burden in real-world diagnosis.
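The abstract does not specify the exact form of the intra-lead and inter-lead MSE losses. The following is a minimal PyTorch sketch of how such a fused objective might look, assuming the standard BYOL-style normalized MSE between online predictions and stop-gradient target projections; the function names (`byol_mse`, `bell_loss`) and the weighting factor `alpha` are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def byol_mse(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # BYOL-style normalized MSE: equivalent to 2 - 2 * cosine similarity
    # between the online prediction p and the target projection z.
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

def bell_loss(online_preds, target_projs, alpha: float = 1.0) -> torch.Tensor:
    """Hypothetical fusion of intra-lead and inter-lead MSE terms.

    online_preds / target_projs: lists with one (batch, dim) tensor per
    lead branch (requires at least two leads for the inter-lead term).
    alpha: assumed weight balancing the two terms (not from the paper).
    """
    num_leads = len(online_preds)
    # Intra-lead term: each lead's online prediction is matched against
    # the target projection of the same lead (stop-gradient on targets).
    intra = sum(byol_mse(online_preds[i], target_projs[i].detach())
                for i in range(num_leads)) / num_leads
    # Inter-lead term: online predictions are matched against target
    # projections of *other* leads, encouraging cross-lead consistency.
    inter_terms = [byol_mse(online_preds[i], target_projs[j].detach())
                   for i in range(num_leads)
                   for j in range(num_leads) if i != j]
    inter = sum(inter_terms) / len(inter_terms)
    return intra + alpha * inter
```

Under this reading, the intra-lead term plays the role of the original BYOL objective applied per lead, while the inter-lead term plausibly encourages representations that are consistent across leads of the same recording; the stop-gradient (`.detach()`) mirrors BYOL's asymmetric online/target design, which is what removes the need for negative pairs.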