Abstract

Building prediction models that strike the right balance between performance and interpretability is a major open challenge in machine learning. Many recent studies have focused on either building intrinsically interpretable models or developing general explainers for black-box models. Although these methods have been widely adopted, their interpretability or explanations are not always useful because the contexts in which models are trained and explanations are produced are ignored. This paper tackles this challenge by developing a context-aware evolutionary learning algorithm (CELA) for building interpretable prediction models. A new context extraction method based on unsupervised self-structuring learning algorithms is developed to partition data into contexts. The proposed algorithm overcomes the limitations of existing evolutionary learning methods in handling large numbers of features and large datasets by training specialised interpretable models on the automatically extracted contexts. The new algorithm has been tested on complex regression datasets and a real-world building energy prediction task. The results suggest that CELA outperforms well-known interpretable machine learning (IML) algorithms and a state-of-the-art evolutionary learning algorithm, and produces predictions much closer to those of black-box algorithms such as XGBoost and artificial neural networks than the compared IML methods. Further analyses also demonstrate that CELA's prediction models are smaller and easier to interpret than those obtained by evolutionary learning without context awareness.
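The abstract describes a two-stage design: unsupervised context extraction, followed by one specialised interpretable model trained per context. The sketch below illustrates that structure only and is not the paper's method: scikit-learn's KMeans stands in for the unsupervised self-structuring clusterer, ridge regression stands in for the evolved interpretable models, and the ContextAwarePredictor class name is hypothetical.

# Minimal sketch of the context-aware idea: partition the inputs into
# "contexts" with an unsupervised clusterer, then fit one small
# interpretable model per context. KMeans and Ridge are assumed
# stand-ins, not the components used in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class ContextAwarePredictor:
    def __init__(self, n_contexts=4):
        self.clusterer = KMeans(n_clusters=n_contexts, n_init=10, random_state=0)
        self.models = {}

    def fit(self, X, y):
        # Unsupervised step: extract contexts from the inputs alone.
        contexts = self.clusterer.fit_predict(X)
        # Supervised step: train one specialised model per context.
        for c in np.unique(contexts):
            mask = contexts == c
            self.models[c] = Ridge(alpha=1.0).fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Route each sample to the model of its assigned context.
        contexts = self.clusterer.predict(X)
        return np.array([self.models[c].predict(X[i:i + 1])[0]
                         for i, c in enumerate(contexts)])

# Usage on synthetic data whose behaviour changes across regions of
# the input space, so per-context models fit better than one global one.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -3.0 * X[:, 2]) + rng.normal(scale=0.1, size=500)
model = ContextAwarePredictor(n_contexts=4).fit(X, y)
print(model.predict(X[:5]))

Because each context receives its own small model, every local model can stay simple and readable even when the global relationship is complex, which is the interpretability argument the abstract makes.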
