Speech impairments, resulting from brain injuries, mental disorders, or vocal abuse, substantially affect an individual’s quality of life and can lead to social isolation. Brain–Computer Interfaces (BCIs), particularly those based on EEG, offer a promising support mechanism by harnessing brain signals. Owing to their clinical efficacy, cost-effective EEG devices, and expanding applications in the medical and social spheres, their usage has surged. This study introduces an ensemble-based feature engineering mechanism to pinpoint the optimal brain rhythm, channel subset, and feature set for accurately predicting imagined words from EEG signals via machine learning models. Leveraging the 2020 International BCI Competition dataset, we applied bandpass filtering, wrapper-based channel selection, and feature-ranking methods to identify the brain rhythms and features most relevant to imagined speech. Kernel-based principal component analysis was then applied to reduce the dimensionality of the feature space. We trained various machine learning models, among which the kNN model excelled, achieving an average accuracy of 73% under 10-fold cross-validation, surpassing the existing literature by 18%. The Gamma rhythm was identified as the most predictive of imagined speech in EEG signals. These advancements herald a new era of more precise and effective BCIs, poised to significantly improve the lives of those with speech impairments.
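The pipeline summarized above (bandpass filtering, feature extraction, kernel PCA compression, and kNN classification under 10-fold cross-validation) could be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the sampling rate, gamma band limits, feature choice (mean band power), and all model parameters are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
fs = 256                                  # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 100, 8, fs  # synthetic EEG trials

# 1. Bandpass-filter each trial to the gamma band (~30-100 Hz, assumed limits).
b, a = butter(4, [30 / (fs / 2), 100 / (fs / 2)], btype="band")
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
X_filt = filtfilt(b, a, X_raw, axis=-1)

# 2. A simple per-channel feature: mean power in the filtered band.
X_feat = (X_filt ** 2).mean(axis=-1)      # shape: (n_trials, n_channels)

# 3. Kernel PCA compression followed by kNN, scored with 10-fold CV.
y = np.repeat(np.arange(5), 20)           # 5 imagined-word classes (synthetic)
clf = make_pipeline(KernelPCA(n_components=4, kernel="rbf"),
                    KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X_feat, y, cv=10)
print(scores.shape)                       # one accuracy per fold: (10,)
```

In practice the feature set would come from the ensemble ranking and wrapper-based channel selection described in the study rather than raw band power.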