Abstract

Traditional cloud-based processing of raw electroencephalogram (EEG) data, particularly for continuous-monitoring use cases, consumes scarce network resources and adds delay. Motivated by the paradigm shift toward edge computing and the Internet of Things (IoT) for continuous monitoring, we focus in this paper on the first step toward carrying out EEG edge analytics at the last frontier (i.e., the ultra-edge) of our considered cyber-physical system, ensuring users' convenience and privacy. To overcome the computational and energy resource constraints of IoT devices (e.g., EEG headbands/headsets), we envision a smart, lightweight model, referred to as Logic-in-Headbands based Edge Analytics (LiHEA), which can be seamlessly incorporated into consumer-grade EEG headsets to reduce delay and bandwidth consumption. By systematically investigating various traditional machine and deep learning models, we identify and select the best model for our envisioned LiHEA. We consider a use-case of detecting confusion, representing levels of distraction, during online course delivery, which has become pervasive during the novel coronavirus (COVID-19) pandemic. We apply a unique feature selection technique to determine which features are most indicative of confusion; delta waves, attention, and theta waves emerge as the three most important features. Among the various traditional machine and deep learning models, our customized random forest model demonstrates the highest accuracy, 90%. Since the limited dataset size may have constrained the deep learning-based approaches, we further apply a deep convolutional generative adversarial network (DCGAN) to generate synthetic traces from representative samples of the original EEG data, thereby increasing the variation in the data. While the performance of the deep learning models increases significantly after data augmentation, they still cannot outperform the random forest model. Furthermore, computational complexity analysis is performed for the three best-performing algorithms, and random forest emerges as the most viable model for our envisioned LiHEA.
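The following is a minimal sketch of the confusion-detection pipeline described above, not the authors' exact implementation. It assumes (our assumptions, not stated in the paper) that the EEG dataset is a CSV with one row per windowed sample, spectral-band and headset-reported columns such as "Delta", "Theta", and "Attention", and a binary "Confused" label; the file name "eeg_confusion.csv" and all hyperparameters are illustrative:

# Minimal sketch: random forest confusion classifier with feature ranking.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("eeg_confusion.csv")      # hypothetical file name
X = df.drop(columns=["Confused"])          # EEG features per sample
y = df["Confused"]                         # 1 = confused, 0 = not confused

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Random forest: the best-performing model reported in the abstract (~90%).
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Impurity-based importances are one simple way to rank features; the
# paper's own selection technique may differ. Delta, attention, and theta
# were reported as the top three features.
for name, score in sorted(
    zip(X.columns, rf.feature_importances_), key=lambda p: -p[1]
):
    print(f"{name}: {score:.3f}")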

Highlights

  • The rapid proliferation of the Internet of Things (IoT) has revolutionized the healthcare industry through the adoption of a plethora of technologies, ranging from wearable devices to early-warning and monitoring systems [1].

  • In this paper, we systematically investigate various traditional machine and deep learning techniques (i.e., random forest, k-nearest neighbor (KNN), support vector machine (SVM), logistic regression, deep neural network (DNN), and convolutional neural network (CNN)) to identify the candidate lightweight AI inference models for the Logic-in-Headbands based Edge Analytics (LiHEA) framework (see the comparison sketch after this list).

  • To train the AI models, we consider the use-case of EEG data analytics for detecting the confusion of an individual, since this scenario largely impacts mental health assessment and concentration levels while attending the online educational courses that proliferated during the novel coronavirus (COVID-19) pandemic.
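
The sketch below illustrates the model comparison named in the second highlight. Only the classical scikit-learn candidates are shown; the DNN/CNN baselines would be built separately (e.g., in Keras). The data loading mirrors the earlier sketch, with the same hypothetical file name and column labels; splits and hyperparameters are illustrative, not the authors' configuration:

# Minimal sketch: cross-validated comparison of the classical candidates.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("eeg_confusion.csv")      # hypothetical file name
X, y = df.drop(columns=["Confused"]), df["Confused"]

candidates = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# 5-fold cross-validated accuracy for each candidate model.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")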


Summary

INTRODUCTION

The rapid proliferation of the Internet of Things (IoT) has revolutionized the healthcare industry through the adoption of a plethora of technologies, ranging from wearable devices to early-warning and monitoring systems [1]. The considered AI-based models can be deployed on various platforms for a plethora of tasks: smart device control, word prediction for speech-impaired users, assessing concentration levels while taking part in online courses, continuous and non-intrusive monitoring of post-surgical patients rehabilitating at home, non-intrusive sleep monitoring, and so forth. While this ultra-edge computing concept is appealing, it must contend with limited computational and energy resources. Our investigation reveals that the unavailability of a large EEG dataset could be a performance bottleneck for EEG computing using deep learning techniques. To remedy this issue, we employ a deep convolutional generative adversarial network (DCGAN)-based data augmentation technique to generate synthetic EEG traces for training the deep learning models, and conduct a comparative analysis against the most accurate machine learning counterpart, i.e., random forest.
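
The following is a minimal PyTorch sketch of a DCGAN generator for EEG augmentation of the kind described above. The architecture sizes (latent dimension, channel counts, trace length of 64 samples) are our illustrative assumptions, not the paper's exact network; the discriminator and the adversarial training loop are omitted for brevity:

# Minimal sketch: DCGAN-style generator producing synthetic 1-D EEG traces.
import torch
import torch.nn as nn

class EEGGenerator(nn.Module):
    def __init__(self, latent_dim=100, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            # Project latent noise to an 8-step feature map, then upsample
            # with transposed 1-D convolutions to a 64-sample trace.
            nn.ConvTranspose1d(latent_dim, 128, kernel_size=8, stride=1),          # -> length 8
            nn.BatchNorm1d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=2, padding=1),       # -> length 16
            nn.BatchNorm1d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1),        # -> length 32
            nn.BatchNorm1d(32),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(32, channels, kernel_size=4, stride=2, padding=1),  # -> length 64
            nn.Tanh(),  # outputs in [-1, 1]; rescale to the EEG value range
        )

    def forward(self, z):
        return self.net(z)

# Generate a batch of 16 synthetic single-channel traces from Gaussian noise.
gen = EEGGenerator()
z = torch.randn(16, 100, 1)   # (batch, latent_dim, length=1)
fake = gen(z)                 # shape: (16, 1, 64)
print(fake.shape)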

RELATED WORK
DATASET PREPARATION
PERFORMANCE EVALUATION
CLASSIFICATION AND VALIDATION
EVALUATING LIGHTWEIGHT PROPERTY OF THE MODELS
COMPLEXITY ANALYSIS OF THE TOP CANDIDATE AI INFERENCE MODEL FOR LIHEA
INVESTIGATING DCGAN-ASSISTED EEG DATA AUGMENTATION
Findings
CONCLUSION AND FUTURE WORK