Abstract

Building energy management systems (BEMS) have gone some way towards standardising building energy consumption data formats, thereby enhancing their compatibility with ML-based prediction algorithms. However, data shortage remains a significant barrier to accurate building energy consumption prediction. Against this backdrop, a potentially viable remedy is to rationalise the features extracted from the available data so that they better represent building energy consumption. This approach is expected to address the problem of redundant and irrelevant information in the feature set, which can undermine the performance of current ML-based methods. To date, no research has systematically investigated the application and impact of feature selection on building energy consumption prediction. Hence, the overarching purpose of this study is to propose a practical framework for building energy consumption prediction based on feature selection methods, alleviating the problems caused by indiscriminately extending the feature set when data are insufficient. Time information and delay effects of meteorological data were used as initial input features, after which feature selection methods were applied. The robustness of the proposed approach was then tested using prevalent ML methods for 1-, 12- and 24-step-ahead energy consumption prediction for three buildings. The results indicated that multivariate wrapper methods showed the best performance in all scenarios and significantly outperformed all other methods. For the George Begg building, the RMSE of 1-, 12- and 24-step-ahead prediction was improved by 44.6%, 54.6% and 53.1%, respectively, while for the Learning Commons the corresponding RMSE improvements were 44.01%, 15.56% and 20.39%. Time information and lagged features of weather conditions accounted for most of the selected features. Prediction performance remained reasonably constant across different data sizes: there were no distinguishable differences between the 3-month, 6-month and 1-year subsets, implying that as little as 3 months of data is sufficient for the feature selection task.
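To illustrate the kind of pipeline the abstract describes, the sketch below shows how lagged (delay-effect) weather features and time information could be constructed and passed to a multivariate wrapper feature-selection step before fitting a multi-step-ahead regressor. It is a minimal sketch assuming pandas and scikit-learn; the column names (temperature, humidity, solar_radiation), lag orders, and the choice of estimator are illustrative assumptions, not the paper's actual configuration.

# Hypothetical sketch: build lagged weather / time features, run wrapper-based
# (forward sequential) feature selection, then fit a regressor for
# multi-step-ahead consumption prediction. Column names, lag orders and the
# estimator are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector


def build_features(df: pd.DataFrame, max_lag: int = 6) -> pd.DataFrame:
    """Add time information and lagged weather features (df has a DatetimeIndex)."""
    out = df.copy()
    out["hour"] = out.index.hour            # time information
    out["dayofweek"] = out.index.dayofweek
    for col in ["temperature", "humidity", "solar_radiation"]:  # assumed columns
        for lag in range(1, max_lag + 1):
            out[f"{col}_lag{lag}"] = out[col].shift(lag)       # delay effects
    return out.dropna()


def select_and_fit(features: pd.DataFrame, target: pd.Series, horizon: int = 1):
    """Multivariate wrapper selection, then fit for `horizon` steps ahead."""
    # Align the target `horizon` steps into the future with the feature rows.
    y = target.reindex(features.index).shift(-horizon)
    data = features.assign(_y=y).dropna()
    X, y = data.drop(columns="_y"), data["_y"]

    estimator = RandomForestRegressor(n_estimators=100, random_state=0)
    selector = SequentialFeatureSelector(
        estimator, direction="forward", n_features_to_select="auto", cv=3
    )
    selector.fit(X, y)
    selected = list(X.columns[selector.get_support()])

    model = estimator.fit(X[selected], y)   # refit on the selected subset
    return model, selected

In practice the same selection routine would be repeated per building and per prediction horizon (e.g. 1, 12 and 24 steps ahead), which is the scenario structure the study evaluates.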
