Abstract

Stroke is a major cause of mortality and disability worldwide, one that roughly one in four people are at risk of experiencing in their lifetime. Pre-hospital stroke assessment plays a vital role in identifying stroke patients accurately and accelerating further examination and treatment in hospitals. Accordingly, the National Institutes of Health Stroke Scale (NIHSS), the Cincinnati Pre-hospital Stroke Scale (CPSS), and the Face Arm Speech Time (F.A.S.T.) test are globally known stroke assessment tools. However, the validity of these tests is questionable in the absence of neurologists, and access to healthcare may be limited. Therefore, in this study, we propose a motion-aware and multi-attention fusion network (MAMAF-Net) that can detect stroke from multiple examination videos. In contrast to other studies on stroke detection from video analysis, our study is the first to collect a dataset encompassing transient ischemic attack (TIA), stroke, and healthy controls, and to propose an end-to-end solution that uses multiple video recordings of each subject. The proposed MAMAF-Net consists of motion-aware modules that sense patient mobility, attention modules that fuse the multi-input video data, and 3D convolutional layers that perform diagnosis from the attention-based extracted features. Experimental results on the collected Stroke-data dataset show that the proposed MAMAF-Net achieves successful stroke detection with 93.62% sensitivity, a 91.19% F1-score, and a kappa measure of 0.7472, together with a 3.92% increase in AUC over state-of-the-art deep learning models.
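To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of a multi-video classifier with motion-aware encoders and attention-based fusion of the per-video features. The module names, layer sizes, number of videos, and fusion scheme are illustrative assumptions only and do not reproduce the authors' MAMAF-Net implementation.

# Hypothetical sketch of a multi-video stroke classifier (not the authors' code).
import torch
import torch.nn as nn


class MotionAwareEncoder(nn.Module):
    """Encodes one examination video; frame differencing serves as a simple motion cue."""

    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, video):  # video: (B, C, T, H, W)
        motion = video[:, :, 1:] - video[:, :, :-1]   # temporal differences approximate motion awareness
        feats = self.backbone(motion).flatten(1)      # (B, 64)
        return self.proj(feats)                       # (B, embed_dim)


class MultiVideoStrokeNet(nn.Module):
    """Fuses per-video embeddings with self-attention, then classifies healthy / TIA / stroke."""

    def __init__(self, num_videos=4, embed_dim=128, num_classes=3):
        super().__init__()
        self.encoders = nn.ModuleList([MotionAwareEncoder(embed_dim) for _ in range(num_videos)])
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, videos):  # videos: list of num_videos tensors, each (B, C, T, H, W)
        tokens = torch.stack([enc(v) for enc, v in zip(self.encoders, videos)], dim=1)  # (B, V, D)
        fused, _ = self.attn(tokens, tokens, tokens)  # attention across the video streams
        return self.head(fused.mean(dim=1))           # (B, num_classes)


if __name__ == "__main__":
    model = MultiVideoStrokeNet()
    dummy = [torch.randn(2, 3, 16, 64, 64) for _ in range(4)]  # four examination videos per subject
    print(model(dummy).shape)  # torch.Size([2, 3])

In this sketch, each examination video is encoded independently and the resulting embeddings are treated as tokens over which self-attention performs the multi-input fusion before a single classification head produces the diagnosis.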
