Abstract

Explainability is a key aspect of machine learning, necessary for ensuring transparency and trust in decision-making processes. As machine learning models become more complex, the integration of neural and symbolic approaches has emerged as a promising solution to the explainability problem. One effective approach uses search techniques to extract rules from trained deep neural networks by examining weight and bias values and calculating their correlation with outputs. This article proposes incorporating cosine similarity into this process to narrow the search space and identify the critical path connecting inputs to final results. Additionally, the integration of first-order logic (FOL) is suggested to provide a more comprehensive and interpretable description of the decision-making process. By leveraging cosine similarity and FOL, an algorithm capable of extracting and explaining the rule patterns learned by a trained feedforward neural network was developed and tested in two use cases, demonstrating its effectiveness in providing insights into model behavior.
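To illustrate the core idea described above, the sketch below uses cosine similarity to rank which connections in a small feedforward network are most aligned with a given input and output, tracing a candidate "critical path" that could then be rendered as a FOL-style rule. This is a minimal, hypothetical example: the toy weights, the ReLU activation, and the scoring heuristic are assumptions for illustration, not the article's actual algorithm.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (0 when either has zero norm)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Toy "trained" network: 3 inputs -> 4 hidden units -> 2 outputs (illustrative only).
rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 3))        # hidden layer weights
b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4))        # output layer weights
x = np.array([0.9, 0.1, 0.4])       # one example input

h = np.maximum(0, W1 @ x + b1)      # ReLU hidden activations

# Narrow the search space: keep only the hidden unit whose incoming weights
# best align with the input, and the output whose weights best align with
# the hidden activations.
best_hidden = int(np.argmax([cosine(W1[j], x) for j in range(W1.shape[0])]))
best_output = int(np.argmax([cosine(W2[k], h) for k in range(W2.shape[0])]))

# The selected path could then be expressed as a FOL-style rule, e.g.
# high(x_i) AND active(h_j) -> class(k).
print(f"critical path: inputs -> hidden unit {best_hidden} -> output {best_output}")
```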
