Background: In motor imagery brain–computer interface (MI-BCI) research, electroencephalogram (EEG) signals are complex and nonlinear, which makes signal processing and classification difficult with traditional linear methods. Information entropy, being intrinsically nonlinear, effectively captures the dynamic behavior of EEG signals and thereby compensates for the inability of traditional linear methods to capture nonlinear features. However, the many available entropy measures have unclear application scenarios and lack systematic description.

Methods: This study reviewed 63 high-quality research articles on the application of entropy in MI-BCI published between 2019 and 2023, and summarizes the names, functions, and application scopes of 13 commonly used entropy measures.

Results: The findings indicate that sample entropy (16.3%), Shannon entropy (13%), fuzzy entropy (12%), permutation entropy (9.8%), and approximate entropy (7.6%) are the most frequently used entropy features in MI-BCI. Most studies employ a single entropy feature (79.7%); among multi-entropy applications, two-entropy (9.4%) and three-entropy (4.7%) combinations are the most prevalent. Incorporating entropy features can significantly improve pattern classification accuracy (by 8–10%). Most studies (67%) use public datasets for classification verification, a minority (28%) design and conduct their own experiments, and only 5% combine both approaches.

Conclusions: Future research should examine the effects of the various entropy features on specific problems in order to clarify their application scenarios. As research methodologies continue to evolve, entropy features are poised to play a significant role in a wide array of fields and contexts.
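To illustrate the kind of feature the reviewed studies extract, below is a minimal Python sketch of sample entropy, the most frequently used measure in the surveyed literature. The function name, the embedding dimension m = 2, and the tolerance r = 0.2 × SD are common conventions assumed here, not parameters taken from any specific cited study.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    pairs of length-(m+1) templates. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)  # common convention: 20% of the signal's SD
    n = len(x)

    def count_matches(templ_len):
        # Use n - m templates for both lengths so counts are comparable,
        # as in the standard SampEn definition.
        num = n - m
        templ = np.array([x[i:i + templ_len] for i in range(num)])
        count = 0
        for i in range(num - 1):
            # Chebyshev distance to all later templates (skips self-matches
            # and counts each unordered pair once).
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# A regular signal (sine) should score lower than white noise,
# reflecting the predictability that entropy features quantify.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
se_sine = sample_entropy(np.sin(t))
se_noise = sample_entropy(rng.standard_normal(500))
print(se_sine, se_noise)
```

In an MI-BCI pipeline, such a value would typically be computed per channel (and often per frequency band) and concatenated with other features before classification.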