Abstract
Automated extraction of essential information from electrocardiography (ECG) recordings has long been an active research topic. Digital processing methods focus chiefly on locating the fiducial points that mark the onset and offset of the P, QRS, and T waves based on their waveform properties. Unavoidable noise during ECG acquisition and inherent physiological differences among individuals make these reference points difficult to identify accurately, which degrades performance. The proposed approach proceeds through several primary stages of preliminary processing of the ECG signal: preparing the raw data and converting it into readable files, removing empty records, and unifying the signal width to a fixed length of 250 samples so that noise can be removed accurately; detecting the QRS complex first and the P and T waves implicitly; and finally locating the required peak and segmenting the signal around it. A pre-trained U-Net model is used for deep learning: it takes an ECG signal with a customisable sampling rate as input and outputs a list of the beginning and ending points of the P and T waves as well as the QRS complexes. The distinguishing features of our segmentation method are its high speed, minimal parameter requirements, and strong generalization capability, and its output can be used for disease diagnosis or biometric systems.
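The sketch below is a minimal illustration, not the authors' implementation, of the pipeline the abstract describes: resampling a raw ECG window to the fixed width of 250 samples, running a 1D U-Net-style segmenter, and converting the per-sample class mask into wave onset/offset pairs. The tiny network, the class layout (0 = background, 1 = P, 2 = QRS, 3 = T), and all function names are assumptions for illustration only.

```python
# Hypothetical sketch of the described ECG segmentation pipeline (not the paper's code).
import numpy as np
import torch
import torch.nn as nn


class TinyUNet1D(nn.Module):
    """Illustrative 1D encoder-decoder; the paper's pre-trained U-Net is larger."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 9, padding=4), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(32, 16, 9, padding=4), nn.ReLU(),
            nn.Conv1d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))


def resample_to_fixed_length(signal: np.ndarray, target_len: int = 250) -> np.ndarray:
    """Linearly resample a 1D ECG segment to the fixed width used by the model."""
    old_idx = np.linspace(0.0, 1.0, num=len(signal))
    new_idx = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_idx, old_idx, signal)


def mask_to_boundaries(mask: np.ndarray, wave_class: int):
    """Return (onset, offset) index pairs for contiguous runs of one wave class."""
    runs, start = [], None
    for i, c in enumerate(mask):
        if c == wave_class and start is None:
            start = i
        elif c != wave_class and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(mask) - 1))
    return runs


if __name__ == "__main__":
    ecg = np.random.randn(360)                     # stand-in for one raw ECG window
    x = resample_to_fixed_length(ecg, 250)
    x = torch.tensor(x, dtype=torch.float32).view(1, 1, -1)

    model = TinyUNet1D()                           # the real model would load pre-trained weights
    with torch.no_grad():
        mask = model(x).argmax(dim=1).squeeze(0).numpy()

    print("QRS segments:", mask_to_boundaries(mask, wave_class=2))
    print("P segments:  ", mask_to_boundaries(mask, wave_class=1))
    print("T segments:  ", mask_to_boundaries(mask, wave_class=3))
```

In practice the onset/offset indices would be mapped back to the original sampling rate before being used for diagnosis or biometric feature extraction.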