Abstract

Hunger is a physiological state that arises from complex interactions of multiple factors, including control by higher brain centers. We aimed to develop an accurate and efficient machine-learning model for the automated detection of hunger from EEG signals. We prospectively acquired 14-channel EEG (sampling frequency 128 Hz) from 43 fasted (hungry group) and 48 post-prandial (control group) healthy subjects using the EMOTIV EPOC+ mobile brain cap system. To augment the hunger response, fasted subjects were also shown video images of food during EEG recording. The EEG signals were divided into 15-second segments, yielding 877 and 852 segments in the hungry and control groups, respectively. We created a novel handcrafted architecture, termed decomposed shifted window (DSWIN), that combined Swin-style patch division with tunable Q-factor wavelet transform-based signal decomposition for multilevel feature extraction from the EEG signals. Textural and statistical features were extracted from the patches and decomposed signals using a novel penta-pattern-based extractor and statistical moments, respectively, and then merged. Iterative neighborhood component analysis (INCA) and iterative ReliefF (IRF) were applied for feature selection, generating 28 selected feature vectors that were fed to a shallow k-nearest neighbors (kNN) classifier to obtain channel-wise prediction vectors. From the 28 channel-wise prediction vectors, a further 26 mode-based voted results were calculated using iterative hard majority voting, and the best overall result was selected with a greedy algorithm. Our model attained binary classification accuracies of 99.54% and 82.71% for hungry vs. control using 10-fold and leave-one-subject-out cross-validations, respectively.
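
The decision-fusion stage described above can be illustrated with a minimal sketch, not the authors' exact code: the 28 channel-wise kNN prediction vectors are sorted by their individual accuracies, iterative hard majority voting (IHMV) then forms 26 voted prediction vectors by taking the per-segment mode over the top-3, top-4, ..., top-28 vectors, and the single best candidate is kept greedily. The array names, shapes, and the simulated inputs below are illustrative assumptions.

```python
# Minimal sketch of iterative hard majority voting (IHMV) with greedy selection.
# Assumes 28 channel-wise prediction vectors over 877 + 852 = 1729 segments.
import numpy as np
from scipy.stats import mode


def iterative_hard_majority_voting(pred_vectors, y_true):
    """pred_vectors: (n_vectors, n_segments) integer class predictions.
    Returns the voted vectors, the accuracies of all candidates, and the
    index of the best candidate (greedy selection)."""
    n_vectors = pred_vectors.shape[0]

    # Accuracy of each individual prediction vector.
    accs = np.array([(p == y_true).mean() for p in pred_vectors])

    # Sort prediction vectors from most to least accurate.
    order = np.argsort(accs)[::-1]
    sorted_preds = pred_vectors[order]

    voted, voted_accs = [], []
    # Vote over the top-3, top-4, ..., top-n_vectors predictions
    # (28 inputs -> 26 voted results, matching the abstract).
    for k in range(3, n_vectors + 1):
        v = mode(sorted_preds[:k], axis=0, keepdims=False).mode
        voted.append(v)
        voted_accs.append((v == y_true).mean())

    # Greedy selection over all candidates (individual and voted vectors).
    all_accs = np.concatenate([accs, voted_accs])
    best_idx = int(np.argmax(all_accs))
    return np.array(voted), all_accs, best_idx


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1729)                 # hungry (1) vs. control (0) labels
    # Simulated noisy channel-wise predictions (about 90% agreement with labels).
    preds = np.where(rng.random((28, y.size)) < 0.9, y, 1 - y)
    _, all_accs, best = iterative_hard_majority_voting(preds, y)
    print(f"best candidate index: {best}, accuracy: {all_accs[best]:.4f}")
```

In practice the accuracies used for sorting and final selection would come from the reported cross-validation schemes (10-fold or leave-one-subject-out) rather than from simulated labels as here.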
