Recent progress in deep learning has significantly impacted materials science, accelerating material discovery and innovation. ElemNet, a deep neural network that predicts formation energy from elemental composition, exemplifies the application of deep learning in this field. However, the “black-box” nature of deep learning models often raises concerns about their interpretability and reliability. In this study, we propose XElemNet, which explores the interpretability of ElemNet through a series of explainable artificial intelligence (XAI) techniques focused on post-hoc analysis and model transparency. Experiments with artificial binary datasets show that ElemNet accurately predicts the convex hulls of element-pair systems across groups of the periodic table, indicating that it captures elemental interactions in most cases. In addition, feature importance analysis of ElemNet reveals that the features the model relies on align with chemical properties of the elements, such as reactivity and electronegativity. XElemNet provides insight into the strengths and limitations of ElemNet and offers a potential pathway for explaining other deep learning models in materials science.
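To make the convex-hull evaluation concrete, the sketch below shows one way such a check could be implemented for a single binary system: given compositions x (fraction of element B) and predicted formation energies per atom, it extracts the compositions lying on the lower convex hull, i.e., the candidate stable phases. This is a minimal illustration using SciPy, not the paper's actual evaluation code; the function name and the toy energies are our own assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def lower_convex_hull(fractions, formation_energies):
    """Return indices of compositions on the lower convex hull of a
    binary system. `fractions` is the mole fraction of element B and
    `formation_energies` the predicted formation energy per atom;
    the pure-element endpoints (x = 0 and x = 1) are assumed present.
    """
    pts = np.column_stack([fractions, formation_energies])
    hull = ConvexHull(pts)
    lower = set()
    # hull.equations rows are [a, b, c] with a*x + b*E + c = 0 and
    # (a, b) the outward normal; b < 0 means the normal points toward
    # lower energies, so the facet belongs to the lower envelope.
    for simplex, eq in zip(hull.simplices, hull.equations):
        if eq[1] < 0:
            lower.update(simplex)
    return sorted(lower)

# Toy example: only the endpoints and the x = 0.5 composition
# should lie on the lower hull with these (made-up) energies.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
E = np.array([0.0, -0.12, -0.30, -0.08, 0.0])  # eV/atom, illustrative only
print(lower_convex_hull(x, E))  # -> [0, 2, 4]
```

Under this setup, a model's hull prediction for an element pair can be compared against a reference hull simply by checking whether the two index sets of on-hull compositions agree.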