This study explored methods for handling missing data in the development of machine learning-based energy benchmarking models, assessing their training time, performance, and variance. Unlike the common assumption that data are missing completely at random, this study adopted a missing at random (MAR) perspective, which is more appropriate for building data. We compared the built-in missing data handling of extreme gradient boosting (XGBoost) with the median, k-nearest neighbors (KNN), and classification and regression trees (CART) imputation methods, alongside the Shapley additive explanations (SHAP) method for model interpretability. The findings indicate that, despite its computational demands, the CART method most accurately mirrors the original data distribution, thereby enhancing model performance and stability. The KNN method is also effective, while the XGBoost built-in handling is viable when training time is constrained. This work highlights the importance of reliable test data for accurately evaluating imputation methods. These results offer guidelines for selecting imputation methods during model development, contributing to improved accuracy of energy benchmarking models. The MAR-based approach to missing data analysis holds promise for future research on building energy data, providing crucial insights for accurate assessments of energy benchmark model performance.
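The comparison described above can be illustrated with a minimal sketch. This is not the study's actual pipeline or data; it assumes scikit-learn's `SimpleImputer`, `KNNImputer`, and `IterativeImputer` (the latter with a decision-tree estimator as a CART-style imputer), and it simulates a MAR mechanism in which missingness in one feature depends on an observed feature rather than occurring completely at random:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # synthetic stand-in for building features

# MAR mechanism: missingness in column 1 depends on the *observed* column 0,
# not on the missing values themselves (and not completely at random).
mask = (X[:, 0] > 0.5) & (rng.random(200) < 0.7)
X_missing = X.copy()
X_missing[mask, 1] = np.nan

# Three of the compared strategies; XGBoost's built-in handling would
# instead consume X_missing directly, routing NaNs inside each tree split.
imputers = {
    "median": SimpleImputer(strategy="median"),
    "knn": KNNImputer(n_neighbors=5),
    "cart": IterativeImputer(
        estimator=DecisionTreeRegressor(max_depth=5), random_state=0
    ),
}

for name, imputer in imputers.items():
    X_filled = imputer.fit_transform(X_missing)
    # Error on the entries that were actually masked out
    rmse = np.sqrt(np.mean((X_filled[mask, 1] - X[mask, 1]) ** 2))
    print(f"{name}: RMSE on imputed entries = {rmse:.3f}")
```

Because the true values behind the mask are known here, the sketch can score each imputer directly; this mirrors the abstract's point that a reliable test set is what makes such evaluations meaningful.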