Abstract
To build AI and machine learning systems that are easy to use, it is crucial to gain users' trust. Trust comes from understanding the reasoning behind an AI system's conclusions and results. Recent research efforts on Explainable AI (XAI) reflect the importance of explainability in responding to criticism of "black box" AI. Neural Logic Networks (NLNs) are an approach to embedding logic reasoning (binary or fuzzy) into connectionist models while taking human domain knowledge into account. The reasoning carried out on such network structures permits interpretation beyond binary logic. This article discusses the potential contribution of the NLN approach to making reasoning more explainable.
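To give a flavor of the idea, the sketch below hand-wires a tiny "soft logic" rule using standard fuzzy operators (product t-norm for AND, probabilistic sum for OR). This is an illustrative assumption on our part, not the architecture from the article: inputs are truth degrees in [0, 1], classical Boolean logic is recovered at the extremes, and graded inputs yield interpretable degrees of truth rather than opaque activations.

```python
# Illustrative sketch only: minimal fuzzy-logic operators in the spirit of
# neural logic networks, not the specific architecture from the article.

def fuzzy_and(a, b):
    # Product t-norm: behaves like Boolean AND on {0, 1} inputs.
    return a * b

def fuzzy_or(a, b):
    # Probabilistic sum (t-conorm): behaves like Boolean OR on {0, 1} inputs.
    return a + b - a * b

def fuzzy_not(a):
    # Standard fuzzy negation.
    return 1.0 - a

def rule(x, y, z):
    # A tiny hand-wired "network" encoding: (x AND y) OR (NOT z).
    return fuzzy_or(fuzzy_and(x, y), fuzzy_not(z))

# Binary corner cases reduce to ordinary propositional logic...
print(rule(1.0, 1.0, 1.0))  # -> 1.0
# ...while graded inputs produce an interpretable degree of truth.
print(rule(0.8, 0.9, 0.3))  # -> 0.916
```

Because each node is a named logic operation rather than an arbitrary weighted sum, the path from inputs to output can be read off as a logical rule, which is the kind of interpretability the abstract alludes to.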