Objectives: Post-discharge follow-up is a critical component of post-diagnosis management, but constrained healthcare resources make comprehensive manual follow-up infeasible. However, patients are less cooperative with AI follow-up calls and may even hang up once they perceive that the caller is an AI voice robot. To improve the effectiveness of follow-up, alternative measures should be taken once a patient has perceived the AI voice robot; identifying when this perception occurs is therefore crucial. This study aims to construct a deep-learning-based multimodal identity perception model that identifies whether patients have perceived the AI voice robot.
Methods: Our dataset comprises 2030 patient response audio recordings and their corresponding texts. We conduct comparative experiments and perform an ablation study. The proposed model employs transfer learning, using BERT and TextCNN for text feature extraction, AST and LSTM for audio feature extraction, and self-attention for feature fusion.
Results: Our model outperforms existing baselines, achieving a precision of 86.67%, an AUC of 84%, and an accuracy of 94.38%. Additionally, a generalization experiment was conducted on 144 patients' response audio recordings and corresponding texts from other departments of the hospital, confirming the model's robustness and effectiveness.
Conclusion: Our multimodal identity perception model effectively identifies whether patients have perceived the AI voice robot. Identifying this perception not only helps optimize the follow-up process and improve patient cooperation, but also provides support for the evaluation and improvement of AI voice robots.
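The Methods section above implies a two-branch fusion architecture. Below is a minimal PyTorch sketch of such a design, not the authors' implementation: the checkpoint names (bert-base-chinese, MIT/ast-finetuned-audioset-10-10-0.4593), the class name IdentityPerceptionModel, the hidden sizes, kernel sizes, and the exact fusion and pooling details are all illustrative assumptions.

```python
# Sketch of a BERT+TextCNN / AST+LSTM model with self-attention fusion.
# All checkpoints and hyperparameters are assumptions, not the paper's.
import torch
import torch.nn as nn
from transformers import BertModel, ASTModel

class IdentityPerceptionModel(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Pretrained encoders (transfer learning); checkpoints assumed.
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.ast = ASTModel.from_pretrained(
            "MIT/ast-finetuned-audioset-10-10-0.4593")
        # TextCNN over BERT token embeddings: parallel 1-D convolutions
        # with several kernel sizes, max-pooled and concatenated.
        self.convs = nn.ModuleList(
            nn.Conv1d(self.bert.config.hidden_size, hidden, k)
            for k in (2, 3, 4))
        # LSTM over the AST patch-embedding sequence.
        self.lstm = nn.LSTM(self.ast.config.hidden_size, hidden,
                            batch_first=True)
        self.proj = nn.Linear(hidden, 3 * hidden)
        # Self-attention over the two modality vectors for fusion.
        self.fusion = nn.MultiheadAttention(3 * hidden, num_heads=4,
                                            batch_first=True)
        # Binary output: has the patient perceived the AI caller?
        self.classifier = nn.Linear(3 * hidden, 2)

    def forward(self, input_ids, attention_mask, input_values):
        # Text branch: BERT tokens (B, T, H) -> TextCNN -> (B, 3*hidden).
        tok = self.bert(input_ids,
                        attention_mask=attention_mask).last_hidden_state
        tok = tok.transpose(1, 2)  # (B, H, T) for Conv1d
        text_feat = torch.cat(
            [conv(tok).relu().max(dim=2).values for conv in self.convs],
            dim=1)
        # Audio branch: AST patch sequence -> LSTM final hidden state.
        patches = self.ast(input_values).last_hidden_state  # (B, P, H)
        _, (h_n, _) = self.lstm(patches)
        audio_feat = self.proj(h_n[-1])  # (B, 3*hidden)
        # Fusion: self-attention over the stacked modality tokens,
        # then mean-pool and classify.
        tokens = torch.stack([text_feat, audio_feat], dim=1)  # (B, 2, D)
        fused, _ = self.fusion(tokens, tokens, tokens)
        return self.classifier(fused.mean(dim=1))  # logits: (B, 2)
```

As a design note, treating each modality's pooled vector as one "token" and running self-attention over the pair is one common way to realize the attention-based fusion the abstract names; the paper may instead attend over full token/patch sequences.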