Abstract

This paper presents a novel heart sound segmentation algorithm based on a Temporal-Framing Adaptive Network (TFAN), incorporating a state transition loss and dynamic inference. In contrast to previous state-of-the-art approaches, TFAN does not require any prior knowledge of the state durations of heart sounds and is therefore likely to generalize to non-sinus rhythms. TFAN was trained on 50 recordings randomly chosen from Training set A of the 2016 PhysioNet/Computing in Cardiology Challenge and tested on 12 other independent databases (2,099 recordings and 52,180 beats). Performance was further evaluated on databases grouped into three levels of increasing difficulty (LEVEL-I, -II, and -III). TFAN achieved a superior F1 score on all 12 databases except 'Test-B', with an average of 96.72%, compared to 94.56% for a logistic regression hidden semi-Markov model (LR-HSMM) and 94.18% for a bidirectional gated recurrent neural network (BiGRNN). Moreover, TFAN achieved overall F1 scores of 99.21%, 94.17%, and 91.31% on the LEVEL-I, -II, and -III databases respectively, compared to 98.37%, 87.56%, and 78.46% for LR-HSMM and 99.01%, 92.63%, and 88.45% for BiGRNN. TFAN therefore provides a substantial improvement in heart sound segmentation while using fewer parameters than BiGRNN. The proposed method is highly flexible and likely to be applicable to other non-stationary time series. Further work is required to determine the extent to which this approach improves diagnostic performance, although it is reasonable to expect that superior segmentation will lead to improved diagnostics.
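As a concrete illustration of the evaluation metric reported above, the sketch below computes a segment-level F1 score from reference and detected state onsets. The tolerance-based matching rule, the tolerance value, and the function name are illustrative assumptions for this sketch, not the paper's exact scoring procedure.

```python
# Minimal sketch (assumed, not the paper's exact scoring code) of a
# tolerance-based F1 score for heart sound segmentation: a detected state
# onset counts as a true positive if it falls within `tolerance_s` seconds
# of a still-unmatched reference onset.

def segmentation_f1(reference_onsets, detected_onsets, tolerance_s=0.06):
    """Return (precision, recall, f1) for one recording."""
    ref = sorted(reference_onsets)
    det = sorted(detected_onsets)
    matched_ref = set()
    true_positives = 0

    for d in det:
        # Find the closest unmatched reference onset within the tolerance.
        best_idx, best_dist = None, tolerance_s
        for i, r in enumerate(ref):
            if i in matched_ref:
                continue
            dist = abs(d - r)
            if dist <= best_dist:
                best_idx, best_dist = i, dist
        if best_idx is not None:
            matched_ref.add(best_idx)
            true_positives += 1

    false_positives = len(det) - true_positives
    false_negatives = len(ref) - true_positives

    precision = true_positives / (true_positives + false_positives) if det else 0.0
    recall = true_positives / (true_positives + false_negatives) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Toy example: reference S1 onsets (in seconds) vs. detections from a model.
    reference = [0.10, 0.90, 1.70, 2.50]
    detected = [0.12, 0.88, 1.95, 2.52]  # third detection falls outside tolerance
    print(segmentation_f1(reference, detected))
```

In this toy example three of the four detections match a reference onset, giving precision, recall, and F1 of 0.75 each; the per-database scores reported above are aggregated over all recordings and beats.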
