Automation increasingly shapes modern society, requiring artificial intelligence (AI) systems not only to perform complex tasks but also to provide clear, actionable explanations of their decisions, especially in high-stakes domains. However, most contemporary AI systems struggle to explain their runtime behavior in specific instances, limiting their applicability in contexts that demand stringent outcome justification. Existing approaches have attempted to address this challenge but often fall short in contextual relevance, alignment with human cognition, or scalability. This paper introduces System-of-Systems Machine Learning (SoS-ML), a novel framework for advancing explainable artificial intelligence (XAI) that addresses these limitations. Drawing on insights from philosophy, cognitive science, and the social sciences, SoS-ML integrates human-like reasoning processes into AI, framing explanations as contextual inferences and justifications. The research demonstrates how SoS-ML addresses key challenges in XAI, such as improving explanation accuracy and aligning AI reasoning with human cognition. Through a multi-agent, modular design, SoS-ML fosters collaboration among machine learning models, yielding more transparent, context-aware systems. The framework’s ability to generalize across domains is demonstrated through experiments on the Pima Indians Diabetes dataset and on pie-chart image-to-text interpretation, showing improvements in both model accuracy and explainability. The findings underscore SoS-ML’s role in advancing responsible AI, particularly in high-stakes environments where interpretability and social accountability are paramount.