Abstract
This study used explainable machine learning (XML), a new branch of machine learning (ML), to elucidate how ML models make predictions. Three tree-based regression models, Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGB), were used to predict the normalized mean (Cp,mean), fluctuating (Cp,rms), minimum (Cp,min), and maximum (Cp,max) external wind pressure coefficients of a low-rise building with fixed dimensions in urban-like settings for several wind incidence angles. Two types of XML were used: an intrinsic explainable method, which relies on the DT structure to explain the inner workings of the model, and SHAP (SHapley Additive exPlanations), a post-hoc explanation technique applied to the structurally complex XGB. The intrinsic explainable method proved incapable of explaining the deep tree structure of the DT, but SHAP provided valuable insights by revealing varying degrees of positive and negative contributions from certain geometric parameters, the wind incidence angle, and the density of buildings surrounding the low-rise building. SHAP also illustrated the relationships between these factors and wind pressure, and its explanations agreed with what is generally accepted in wind engineering, thus confirming the causality of the ML model's predictions.
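As a minimal illustrative sketch (not the authors' code), the workflow described above can be reproduced with an XGBoost regressor and SHAP's TreeExplainer. The feature names, data, and hyperparameters below are hypothetical placeholders, assumed only to mirror the kinds of inputs mentioned in the abstract (tap location, wind incidence angle, surrounding building density).

```python
# Illustrative sketch: train a tree-based regressor on placeholder wind-pressure
# data and compute post-hoc SHAP explanations. All features and targets are
# synthetic stand-ins, not the study's dataset.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),     # hypothetical normalized tap x-coordinate
    rng.uniform(0, 1, n),     # hypothetical normalized tap y-coordinate
    rng.uniform(0, 90, n),    # wind incidence angle (deg)
    rng.uniform(0, 0.5, n),   # plan-area density of surrounding buildings
])
y = rng.normal(-0.5, 0.3, n)  # placeholder target, e.g. Cp,mean

# Fit the structurally complex ensemble model (XGB) mentioned in the abstract
model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X, y)

# Post-hoc explanation: TreeExplainer computes SHAP values for tree ensembles,
# attributing each prediction to positive/negative feature contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per feature
feature_names = ["tap_x", "tap_y", "wind_angle", "plan_density"]
for name, imp in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.4f}")
```

In this setup, per-sample SHAP values indicate how much each input pushes a predicted pressure coefficient above or below the model's baseline, which is the kind of positive/negative contribution analysis the abstract refers to.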