Abstract

Hopfield Neural Networks (HNNs) are recurrent neural networks used to implement associative memory. They can be applied to pattern recognition, optimization, or image segmentation. However, it is often difficult to give users good explanations of the results obtained with them, mainly because of the large number of changes in the state of the neurons (and in their weights) produced while a problem is being solved. Techniques to visualize, verbalize, or abstract HNNs are currently limited. This paper outlines how automatic video-generation systems can be constructed to explain their execution. This work constitutes a novel approach to building explainable artificial intelligence systems in general, and explainable HNNs in particular, drawing on the theory of data-to-text systems and on software visualization approaches. We present a complete methodology for building these kinds of systems, together with a software architecture that is designed, implemented, and tested, and we explain the technical details of the implementation. We apply our approach to creating a complete explainer video about the execution of an HNN on a small recognition problem. Finally, several aspects of the generated videos are evaluated (quality, content, motivation, and design/presentation).
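
As a concrete illustration of the associative-memory behaviour mentioned in the abstract, the following minimal Python sketch (ours, not code from the paper) stores two small bipolar patterns with the Hebbian rule and recovers one of them from a corrupted cue through asynchronous updates; the names `train_hebbian` and `recall` are illustrative only.

```python
# Minimal Hopfield network sketch (illustrative; not the paper's code).
import numpy as np

def train_hebbian(patterns):
    """Build the Hopfield weight matrix from bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / len(patterns)

def recall(W, state, max_sweeps=20, seed=0):
    """Asynchronously update neurons until no neuron changes (a fixed point)."""
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):
            new_val = 1 if W[i] @ state >= 0 else -1
            if new_val != state[i]:
                state[i] = new_val
                changed = True
        if not changed:               # stable state reached
            break
    return state

# Two tiny bipolar "images" flattened to 6-element vectors.
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]])
W = train_hebbian(patterns)
noisy = np.array([-1, -1,  1, -1,  1, -1])    # pattern 0 with its first bit flipped
print(recall(W, noisy))                       # expected: [ 1 -1  1 -1  1 -1]
```

Every neuron flip performed inside `recall` is exactly the kind of state change that, as the abstract notes, makes the network's behaviour hard to explain without visual support.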

Highlights

  • Artificial Intelligence (AI) and Explainability: transparency is presently one of the most critical words around the world [1,2]

  • For all the above considerations, we propose Automatic Video Generation (AVG) as a new interactive NL technology for Explainable AI, whose objective is to automatically generate step-by-step explainer videos that improve our understanding of complex phenomena involving large amounts of information, Hopfield Neural Networks (HNNs) in our case (see the sketch after this list)

  • The questionnaires will be answered by a group of nine experts in AI and higher education after they have watched the explainer video
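
The sketch below is purely hypothetical and does not describe the authors' AVG implementation; it only illustrates the general idea behind the highlight above: logging each neuron update of a Hopfield recall as a structured event, with a short caption, that a later rendering stage could turn into one video frame. The function and field names (`recall_with_trace`, `sweep`, `caption`) are ours.

```python
# Hypothetical step-by-step trace of a Hopfield recall (not the authors' system).
import json
import numpy as np

def recall_with_trace(W, state, max_sweeps=20):
    """Asynchronous Hopfield recall that logs every neuron flip as one event."""
    state = state.copy()
    trace = []
    for sweep in range(max_sweeps):
        changed = False
        for i in range(len(state)):
            new_val = 1 if W[i] @ state >= 0 else -1
            if new_val != state[i]:
                trace.append({
                    "sweep": sweep,
                    "neuron": i,
                    "old": int(state[i]),
                    "new": int(new_val),
                    "caption": f"Neuron {i} flips to {new_val:+d} because its "
                               f"weighted input has the opposite sign.",
                })
                state[i] = new_val
                changed = True
        if not changed:
            break
    return state, trace

# Example with a single stored pattern; the trace could feed a renderer
# that emits one video frame (and one spoken caption) per logged event.
p = np.array([1, -1, 1, -1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)
_, trace = recall_with_trace(W, np.array([-1, -1, 1, -1]))
print(json.dumps(trace, indent=2))
```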



Introduction

Transparency is one of the most critical words around the world [1,2]. We perceive it as the quality of seeing or understanding others' actions, implying openness, communication, and accountability [3]. AI must be accessible to all human users, not only to expert ones. The lack of transparency can become a severe problem when an unexpected decision needs to be clarified in critical contexts: the medical/health-care domain, judicial systems, banking and finance, bioinformatics, the automobile industry, marketing, election campaigns, precision agriculture, military expert systems, security systems, and education [18].
