Ensemble regression methods are widely used to improve prediction accuracy by combining multiple regression models for continuous numerical targets. However, most ensemble voting regressors assign equal weights to each base model's predictions, which can limit their effectiveness, particularly when no domain knowledge is available to guide the weighting. This uniform weighting ignores the fact that some models perform better than others on a given dataset, leaving room to optimize ensemble performance. To overcome this limitation, we propose the RRMSE (Relative Root Mean Square Error) Voting Regressor, a new ensemble regression technique that weights each base model according to its relative error rate. By using an RRMSE-based weighting function, our method gives greater influence to models that demonstrate higher accuracy, thereby improving overall prediction quality. We evaluated the RRMSE Voting Regressor on six popular regression datasets and compared its performance with several state-of-the-art ensemble regression algorithms. The results show that the RRMSE Voting Regressor consistently achieves lower prediction errors than existing methods across all tested datasets, highlighting the effectiveness of relative error metrics for weighting in ensemble models. Our approach not only fills a gap in current ensemble regression techniques but also provides a reliable and adaptable method for boosting prediction performance across a range of machine learning tasks. By leveraging the strengths of individual models through error-informed weighting, the RRMSE Voting Regressor offers a meaningful advance in ensemble learning.
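The core idea can be sketched in a few lines of Python. The abstract does not give the exact weighting formula, so the snippet below assumes one common scheme: compute each base model's RRMSE (here defined as RMSE normalized by the mean of the true targets) on held-out validation data, then set each model's weight inversely proportional to its RRMSE, normalized to sum to one. The model names, validation data, and the specific RRMSE definition are all illustrative assumptions, not the paper's exact method.

```python
import math

def rrmse(y_true, y_pred):
    # Relative RMSE: RMSE divided by the mean of the true targets.
    # (An assumed definition; the paper's exact formula may differ.)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / (sum(y_true) / len(y_true))

def rrmse_weights(errors):
    # Weight each base model inversely to its RRMSE, normalized to sum to 1,
    # so more accurate models receive larger weights.
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    return [v / total for v in inv]

# Hypothetical validation targets and predictions from three base regressors.
y_val = [3.0, 5.0, 7.0, 9.0]
preds = [
    [2.8, 5.1, 7.2, 8.9],   # model A: small errors
    [3.5, 4.2, 7.9, 9.8],   # model B: larger errors
    [3.0, 5.0, 6.0, 10.0],  # model C: moderate errors
]

errors = [rrmse(y_val, p) for p in preds]
weights = rrmse_weights(errors)

# Weighted ensemble prediction: a weighted average of the base predictions.
ensemble = [sum(w * p[i] for w, p in zip(weights, preds))
            for i in range(len(y_val))]
```

Under this scheme, model A's lower RRMSE earns it the largest weight, so the ensemble output leans toward the most accurate base model rather than averaging all three equally.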