Abstract

This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. This is an early exploration aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether the subject is worth developing further in future research. The answers to both questions are affirmative. Paraconsistent analysis provides insightful prediction visualisation through a mature formal framework that properly supports reasoning. The significant potential envisioned is that paraconsistent analysis can guide neural network development projects, despite its performance issues. This paper provides two explorations. The first is a baseline experiment based on MNIST that establishes the link between paraconsistency and neural networks. The second experiment aims to detect violence in audio files, in order to verify whether the paraconsistent framework scales to industry-level problems. The conclusion of this early assessment is that further research on the subject is worthwhile and may eventually result in a significant contribution to the field.

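To make the idea of paraconsistent analysis concrete, the sketch below computes certainty and contradiction degrees from a pair of evidence values, following the paraconsistent annotated evidential logic Eτ. The choice of Eτ, the variable names, and the mapping from classifier scores to evidence are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a paraconsistent annotation, assuming the annotated
# evidential logic E-tau formulation: a proposition is annotated with a
# favourable evidence degree mu and an unfavourable evidence degree lam,
# both in [0, 1]. The mapping of classifier outputs to evidence below is
# a hypothetical illustration, not the paper's method.

def paraconsistent_annotation(mu: float, lam: float) -> dict:
    """Return the certainty and contradiction degrees for (mu, lam)."""
    certainty = mu - lam          # Dc in [-1, 1]: +1 true, -1 false
    contradiction = mu + lam - 1  # Dct in [-1, 1]: +1 inconsistent, -1 indeterminate
    return {"certainty": certainty, "contradiction": contradiction}

if __name__ == "__main__":
    # Hypothetical usage: treat the positive-class score as favourable
    # evidence and the negative-class score as unfavourable evidence,
    # then inspect where the prediction falls in the (Dc, Dct) plane.
    positive_score, negative_score = 0.82, 0.30
    print(paraconsistent_annotation(mu=positive_score, lam=negative_score))
    # -> certainty ~= 0.52, contradiction ~= 0.12
```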
Highlights

  • In the last decade, the success of artificial intelligence (AI) applications, namely, applications that use machine learning (ML) and/or deep learning (DL) models, has been resounding, as they offer broad benefits and are applied in several areas

  • The features and relations established by such a model are not accessible and are usually very different from what would be expected from a human perspective, which leads to unpredictability in how the model responds to certain situations

  • As this paper is concerned with binary classification, the Modified National Institute of Standards and Technology (MNIST) dataset is reduced to two classes to properly support the analysis (see the sketch after this list)

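As a concrete illustration of the binary reduction mentioned in the last highlight, the sketch below keeps only two digit classes of MNIST and relabels them for binary classification. The choice of digits 0 and 1 and the use of the Keras loader are assumptions for this example; the highlights do not state which classes or tooling the paper uses.

```python
# Illustrative sketch: reduce MNIST to a two-class problem, as suggested by
# the highlight above. The digit choice (0 vs 1) and the Keras loader are
# assumptions made for this example only.
import numpy as np
from tensorflow.keras.datasets import mnist

def load_binary_mnist(pos_digit: int = 0, neg_digit: int = 1):
    """Return MNIST restricted to two digit classes, relabelled as 1/0."""
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    def reduce(x, y):
        mask = np.isin(y, [pos_digit, neg_digit])
        x, y = x[mask], y[mask]
        return x.astype("float32") / 255.0, (y == pos_digit).astype("int64")

    return reduce(x_train, y_train), reduce(x_test, y_test)

if __name__ == "__main__":
    (x_tr, y_tr), (x_te, y_te) = load_binary_mnist()
    print(x_tr.shape, y_tr.mean())  # roughly balanced two-class subset
```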

Summary

Introduction

The success of artificial intelligence (AI) applications, namely, applications that use machine learning (ML) and/or deep learning (DL) models, has been resounding, as they offer broad benefits and are applied in several areas. These applications are not able to logically explain their autonomous decisions and actions to human users. A method used to explain the outputs of machine learning or deep learning models is called explainable AI (XAI) [9]. To explain how such a system works, the explanation must be framed from a human perspective.
