Abstract
Sleep stage classification is one of the essential factors in sleep disorder diagnosis; it can contribute to the treatment of many functional diseases and help prevent major cognitive risks in daily activities. In this study, a novel method that maps EEG signals to music is proposed for classifying sleep stages. A total of 4,752 selected 1-min sleep records extracted from the CAP Sleep database serve as the statistical population for this assessment. In this process, tempo and scale parameters are first extracted from the signal according to the rules of music; these parameters are then applied, together with the dominant frequency of the pre-processed single-channel EEG signal, to produce a sequence of musical notes. A total of 19 features are extracted from the note sequence and fed into feature reduction algorithms; the selected features drive a two-stage classification structure: 1) a 5-class classification (merging S1 and REM; S2; S3; S4; W) achieves accuracies of 89.5% (CAP Sleep database), 85.9% (Sleep-EDF database), and 86.5% (Sleep-EDF Expanded database), and 2) a 2-class classification (S1 vs. REM) achieves accuracies of 90.1% (CAP Sleep database), 88.9% (Sleep-EDF database), and 90.1% (Sleep-EDF Expanded database). The overall percentages of correct classification across all 6 sleep stages are 88.13%, 84.3%, and 86.1% for these databases, respectively. A further objective of this study is to present a new single-channel EEG sonification method. The classification accuracy obtained is higher than or comparable to that of contemporary methods, demonstrating the efficiency of the proposed approach.
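The core sonification idea in the abstract (quantizing the dominant frequency of short EEG windows onto a musical scale to obtain a note sequence) can be sketched as follows. This is a minimal illustration under assumed parameters: the 1-second window length, the octave shift into the audible range, and the major-scale quantization are placeholders, not the paper's actual settings.

```python
import numpy as np

def dominant_frequency(window, fs):
    """Return the frequency (Hz) of the largest spectral peak in a window."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

def to_midi_note(freq_hz, scale_degrees=(0, 2, 4, 5, 7, 9, 11)):
    """Quantize a frequency to the nearest MIDI note on a major scale."""
    # EEG frequencies (~0.5-30 Hz) are shifted up 8 octaves into an
    # audible register before quantizing (an illustrative assumption).
    audible = freq_hz * 2 ** 8
    midi = 69 + 12 * np.log2(audible / 440.0)  # 440 Hz = MIDI note 69
    octave, degree = divmod(int(round(midi)), 12)
    nearest = min(scale_degrees, key=lambda d: abs(d - degree))
    return 12 * octave + nearest

def note_sequence(eeg, fs, window_s=1.0):
    """Convert a single-channel EEG trace into a sequence of MIDI notes."""
    step = int(window_s * fs)
    return [to_midi_note(dominant_frequency(eeg[i:i + step], fs))
            for i in range(0, len(eeg) - step + 1, step)]

# Example: a synthetic 10 s "EEG" dominated by a 10 Hz (alpha-band) rhythm
# maps to a constant note, since every window has the same spectral peak.
fs = 100
t = np.arange(0, 10, 1.0 / fs)
notes = note_sequence(np.sin(2 * np.pi * 10 * t), fs)
```

Summary features (the abstract mentions 19 of them) would then be computed over `notes`, e.g. note histograms, interval statistics, or transition counts; the exact feature set is specific to the paper.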