Abstract
Autism spectrum disorder (ASD) is a neurological condition that affects an individual's mental development. This work proposes a multimodal, virtual reality (VR)-enabled approach for predicting attention during gaming in children with autism. First, the multimodal inputs, such as face images, electroencephalogram (EEG) signals, and other data, are individually passed through preprocessing and feature extraction. The extracted features are then concatenated, and a hybrid classification model combining an improved deep convolutional neural network (IDCNN) and a long short-term memory (LSTM) network is used for expression detection. Here, the conventional deep convolutional neural network (DCNN) is improved through a novel block-knowledge-based processing scheme together with a proposed sine-hinge loss function. Finally, an improved weighted mutual information process is employed for attention prediction. The proposed model is evaluated through both simulation and experimental analyses, and the results demonstrate its effectiveness.
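The abstract names a proposed sine-hinge loss but does not give its formula. As a purely illustrative sketch, one plausible reading is a standard hinge margin whose penalty is smoothly modulated by a sine warp; the function name, the warp, and the `margin` parameter below are assumptions, not the paper's actual formulation:

```python
import numpy as np

def sine_hinge_loss(scores, labels, margin=1.0):
    """Hypothetical sine-hinge loss (illustrative only).

    scores: raw classifier outputs; labels: targets in {-1, +1}.
    A plain hinge term max(0, margin - y*s) is rescaled by a sine
    warp on [0, margin], so small violations are penalized more
    gently than a linear hinge while large ones keep full weight.
    """
    hinge = np.maximum(0.0, margin - labels * scores)
    warp = np.sin(np.clip(hinge / margin, 0.0, 1.0) * np.pi / 2)
    return np.mean(warp * hinge)
```

Correctly classified samples beyond the margin contribute zero loss, as with an ordinary hinge; the sine factor only changes how violations inside the margin are weighted.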