Abstract
Improving the accuracy of medical text intent classification can help the medical field achieve more precise diagnoses. However, existing methods suffer from problems such as low accuracy and a lack of knowledge supplementation. To address these challenges, this paper proposes MSA K-BERT, a knowledge-enhanced bidirectional encoder representation model that integrates a multi-scale attention (MSA) mechanism to improve prediction performance while addressing critical issues such as embedding-space heterogeneity and knowledge noise. We systematically validate the reliability of this model on medical text intent classification datasets and compare it with various deep learning models. The results indicate that MSA K-BERT makes the following key contributions. First, it introduces a knowledge-enabled language representation model compatible with BERT, enhancing language representations through the refined injection of knowledge graphs. Second, it adopts a multi-scale attention mechanism to reinforce different feature layers, significantly improving the model's accuracy and interpretability. In particular, on the IMCS-21 dataset, MSA K-BERT achieves precision, recall, and F1 scores of 0.826, 0.794, and 0.810, respectively, all exceeding current mainstream methods.
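To make the multi-scale attention idea concrete, the sketch below shows one plausible form of an MSA head over BERT-style hidden states: the token sequence is pooled at several window scales, each scale is summarized with learned attention weights, and the per-scale summaries are fused into a single feature vector for the intent classifier. This is a minimal illustration under assumed design choices (PyTorch, average pooling, the scale set, and all names such as MultiScaleAttention are hypothetical), not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttention(nn.Module):
    """Pools a sequence of hidden states at several window scales and
    fuses the pooled views with learned attention weights."""

    def __init__(self, hidden_size: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One scoring vector per scale to attend over pooled positions.
        self.scorers = nn.ModuleList(
            nn.Linear(hidden_size, 1) for _ in scales
        )
        # Fuse the per-scale summaries back into one vector.
        self.fuse = nn.Linear(hidden_size * len(scales), hidden_size)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_size), e.g. one BERT layer's output.
        summaries = []
        for scale, scorer in zip(self.scales, self.scorers):
            # Average-pool neighboring tokens to form coarser "tokens".
            pooled = F.avg_pool1d(
                hidden.transpose(1, 2), kernel_size=scale, stride=scale
            ).transpose(1, 2)                                # (batch, seq/scale, hidden)
            # Attention weights over the pooled positions at this scale.
            weights = torch.softmax(scorer(pooled), dim=1)   # (batch, seq/scale, 1)
            summaries.append((weights * pooled).sum(dim=1))  # (batch, hidden)
        return self.fuse(torch.cat(summaries, dim=-1))       # (batch, hidden)

# Usage: the fused feature vector would feed a linear intent classifier.
msa = MultiScaleAttention(hidden_size=768)
states = torch.randn(8, 32, 768)   # stand-in batch of BERT hidden states
features = msa(states)             # shape: (8, 768)
```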