Electrocardiographic (ECG) R-peak detection is essential for every sensor-based cardiovascular health monitoring system. To validate R-peak detectors, comparing the predicted results with reference annotations is crucial. This comparison is typically performed using tools provided by the WaveForm DataBase (WFDB) software package or custom methods. However, many studies fail to provide detailed information on the validation process. The literature also highlights inconsistencies in reporting window size, a crucial parameter used to compare predictions with expert annotations and to distinguish false peaks from true R-peaks. Additionally, there is a need for uniformity in reporting the total number of beats for individual or collective records of the widely used MIT-BIH arrhythmia database. Thus, we aim to review the validation methods of various R-peak detection methodologies before their real-time implementation. This review discusses the impact of non-beat annotations when using a custom validation method, allowable window tolerance, the effects of window-size deviations, and the implications of varying beat counts and skipped segments for ECG testing, providing a comprehensive guide for researchers. Addressing these validation gaps is critical, as they can significantly affect validation outcomes. Finally, the conclusion proposes a structured concept as a future approach: a guide for integrating WFDB R-peak validation tools to test any QRS-annotated ECG database. Overall, this review underscores the importance of complete transparency in reporting testing procedures, which prevents misleading assessments of R-peak detection algorithms and enables fair methodological comparison.
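As a rough illustration of the window-tolerance comparison discussed above, the sketch below matches detected peak locations against MIT-BIH reference annotations within a fixed sample window after discarding non-beat labels. It is a minimal example, not the procedure of any specific study: it assumes the Python wfdb package with the record 100 annotation file available locally, the listed beat-symbol set, a ±75 ms window, and a synthetic stand-in for the detector's output.

```python
import numpy as np
import wfdb  # Python wfdb package, assumed installed and record files available locally

# WFDB beat annotation symbols; non-beat labels such as '+', '~', '|' are excluded
# (illustrative set based on the standard WFDB annotation codes)
BEAT_SYMBOLS = {'N', 'L', 'R', 'B', 'A', 'a', 'J', 'S', 'V', 'r',
                'F', 'e', 'j', 'n', 'E', '/', 'f', 'Q', '?'}

def match_peaks(ref_samples, det_samples, tol_samples):
    """Greedy one-to-one matching of detections to reference R-peaks within
    a +/- tol_samples window; returns (TP, FP, FN) counts."""
    ref = np.sort(np.asarray(ref_samples))
    det = np.sort(np.asarray(det_samples))
    used = np.zeros(len(det), dtype=bool)
    tp = 0
    for r in ref:
        # nearest unused detection inside the tolerance window, if any
        idx = np.where(~used & (np.abs(det - r) <= tol_samples))[0]
        if idx.size:
            used[idx[np.argmin(np.abs(det[idx] - r))]] = True
            tp += 1
    fn = len(ref) - tp           # reference beats with no matching detection
    fp = int((~used).sum())      # detections not matched to any reference beat
    return tp, fp, fn

# Example: record 100 of the MIT-BIH arrhythmia database (fs = 360 Hz)
ann = wfdb.rdann('100', 'atr')                                   # reference annotations
ref = ann.sample[np.isin(ann.symbol, list(BEAT_SYMBOLS))]        # keep beat labels only
det = ref + np.random.randint(-5, 6, size=ref.size)              # synthetic detector output for demo
tp, fp, fn = match_peaks(ref, det, tol_samples=int(0.075 * 360)) # e.g. +/- 75 ms window
sensitivity = tp / (tp + fn)
ppv = tp / (tp + fp)
print(f"TP={tp}, FP={fp}, FN={fn}, Se={sensitivity:.4f}, +P={ppv:.4f}")
```

In practice, the choice of tolerance window, the handling of non-beat symbols, and whether segments are skipped all change these counts, which is precisely the reporting gap the review examines.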