Abstract

At present, deep-learning-based heart sound diagnosis algorithms mostly rely on large, complex models to achieve high accuracy, which makes them difficult to deploy on mobile devices owing to their large parameter counts and high computational cost. The mainstream approach to processing heart sound signals uses Mel-frequency cepstral coefficient (MFCC) features; however, most existing methods overlook the multi-channel nature of MFCC. To address this issue, we propose a Quaternion Dynamic Representation with Joint Learning (QDRJL) neural network for learning multi-channel MFCC features. The proposed approach combines quaternion dynamic convolution, with dynamic weighting, and a Quaternion Interior Learning Block (QILB). Finally, we present a global and energy joint learning branch that jointly learns MFCC features. The success of the proposed quaternion network depends on its ability to exploit the internal relations between quaternion-valued input features and on the definition of dynamic weight variables in the augmented quaternion domain. In an experimental evaluation on the 2016 PhysioNet/CinC Challenge dataset, the proposed classifier achieved an accuracy of up to 97.2%, outperforming state-of-the-art heart sound classification algorithms, while quaternion properties reduced the number of network parameters to 25% of a comparable real-valued model.
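The 25% parameter figure follows from standard quaternion-convolution accounting rather than anything specific to this paper: a quaternion layer groups channels in fours and replaces each 4×4 block of independent real weights between two channel groups with a single weight quaternion (4 scalars) applied via the Hamilton product. A minimal sketch, with hypothetical layer sizes chosen for illustration:

```python
import numpy as np

def hamilton_product(w, x):
    """Hamilton product of two quaternions given as (r, i, j, k) arrays."""
    r1, i1, j1, k1 = w
    r2, i2, j2, k2 = x
    return np.array([
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,  # real part
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,  # i part
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,  # j part
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,  # k part
    ])

# Parameter count for one conv layer (sizes are illustrative assumptions).
c_in, c_out, k = 64, 64, 3
real_params = c_in * c_out * k * k
# Quaternion conv: one weight quaternion (4 scalars) per pair of
# 4-channel groups replaces a 4x4 block of real weights.
quat_params = (c_in // 4) * (c_out // 4) * 4 * k * k
print(quat_params / real_params)  # -> 0.25
```

This weight sharing across the four components is what yields the 4-to-1 reduction, independent of kernel size.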
