Abstract

Background and objective: Most existing machine learning-based heart sound classification methods achieve limited accuracy because they rely primarily on single-domain feature information and attend equally to every part of the signal rather than employing a selective attention mechanism. In addition, they fail to exploit convolutional neural network (CNN)-based features with an effective fusion strategy.

Methods: To overcome these limitations, this paper proposes a novel multimodal attention convolutional neural network (MACNN) with a feature-level fusion strategy, in which Mel-cepstral domain and general frequency domain features are incorporated to increase feature diversity. In the proposed method, DilationAttenNet is first used to construct attention-based CNN feature extractors, and these extractors are then jointly optimized in MACNN at the feature level. The attention mechanism suppresses irrelevant information and focuses on the crucial, diverse features extracted by the CNN.

Results: Extensive experiments are carried out to study the efficacy of feature-level fusion in comparison to early fusion. The results show that the proposed MACNN significantly outperforms state-of-the-art approaches in terms of accuracy and score on the two publicly available GitHub and PhysioNet datasets.

Conclusion: Our experiments demonstrate the high performance of the proposed MACNN for heart sound classification, and hence its potential clinical usefulness in the identification of heart diseases. This technique can assist cardiologists and researchers in the design and development of heart sound classification methods.
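To make the feature-level fusion idea concrete, the following is a minimal PyTorch sketch of the architecture pattern the abstract describes: one attention-based CNN branch per feature modality (Mel-cepstral and frequency domain), with the branch outputs concatenated before the classifier so that both extractors are optimized jointly. The branch structure, layer sizes, attention gate, and all names below are illustrative assumptions, not the paper's exact DilationAttenNet or MACNN design.

```python
# Minimal sketch of feature-level fusion with attention-based CNN branches.
# Layer sizes and the attention mechanism are assumptions for illustration.
import torch
import torch.nn as nn


class AttentionBranch(nn.Module):
    """One attention-based CNN feature extractor (a DilationAttenNet-style
    branch, sketched here with dilated 1-D convolutions and a simple
    channel-attention gate; the paper's actual design may differ)."""

    def __init__(self, in_channels: int, hidden: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
        )
        # Channel-attention gate: re-weights each channel so the branch can
        # suppress irrelevant information and emphasize crucial features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(hidden, hidden, kernel_size=1),
            nn.Sigmoid(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)
        h = h * self.attn(h)              # attention re-weights CNN features
        return self.pool(h).squeeze(-1)   # (batch, hidden) feature vector


class MACNNSketch(nn.Module):
    """Feature-level fusion: each modality gets its own attention branch,
    and the branch outputs are concatenated before the classifier, so both
    extractors are trained jointly end to end."""

    def __init__(self, mfcc_bins: int = 13, spec_bins: int = 64, classes: int = 2):
        super().__init__()
        self.mfcc_branch = AttentionBranch(mfcc_bins)  # Mel-cepstral features
        self.spec_branch = AttentionBranch(spec_bins)  # frequency-domain features
        self.classifier = nn.Linear(32 + 32, classes)

    def forward(self, mfcc: torch.Tensor, spec: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.mfcc_branch(mfcc), self.spec_branch(spec)], dim=1)
        return self.classifier(fused)


# Example: a batch of 8 recordings with 13 MFCC bins and 64 spectrogram
# bins over 100 frames each.
model = MACNNSketch()
logits = model(torch.randn(8, 13, 100), torch.randn(8, 64, 100))
print(logits.shape)  # torch.Size([8, 2])
```

The contrast with early fusion is that an early-fusion model would stack or concatenate the raw feature maps before any convolution, forcing one extractor to serve both modalities; fusing at the feature level, as above, lets each branch and its attention gate specialize in its own domain.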
