Abstract
Explainability remains the holy grail in designing next-generation pervasive artificial intelligence (AI) systems. Current neural-network-based AI design methods do not naturally lend themselves to reasoning about how a decision is derived from the input data. A primary reason for this is their overwhelming arithmetic complexity. Built on the foundations of propositional logic and game theory, the principles of learning automata are increasingly gaining momentum for AI hardware design. Their lean, logic-based processing has demonstrated significant energy-efficiency and performance advantages. The hierarchical logic underpinning can also provide opportunities for by-design explainable and dependable AI hardware. In this paper, we study explainability and dependability using reachability analysis in two simulation environments. First, we use a behavioral SystemC model to analyze the different state transitions. Second, we carry out illustrative fault injection campaigns in a low-level SystemC environment to study how reachability is affected in the presence of hardware stuck-at-1 faults. Our analysis provides the first insights into explainable decision models and demonstrates dependability advantages of learning-automata-driven AI hardware design.
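The abstract does not spell out the automaton model behind the behavioral SystemC analysis, so the following is a minimal sketch only, assuming a standard two-action Tsetlin-style learning automaton with N states per action; the module name, the state encoding (0..N-1 selects action 0, N..2N-1 selects action 1), and the value of N are all hypothetical choices for illustration, not the paper's design.

```cpp
#include <systemc.h>

// Behavioral model of a single two-action learning automaton.
// State encoding (assumed): 0..N-1 -> action 0, N..2N-1 -> action 1;
// the deepest (most confident) states are 0 and 2N-1.
SC_MODULE(Automaton) {
    sc_in<bool>  clk;
    sc_in<bool>  reward;   // reinforcement input: 1 = reward, 0 = penalty
    sc_out<bool> action;   // currently selected action

    static const int N = 3;  // assumed depth (states per action)
    int state;               // current automaton state

    void step() {
        if (reward.read()) {
            // Reward: move deeper into the current action's half.
            if (state < N && state > 0)                --state;
            else if (state >= N && state < 2 * N - 1)  ++state;
        } else {
            // Penalty: drift toward the boundary, possibly switching action.
            if (state < N) ++state;
            else           --state;
        }
        action.write(state >= N);
        // Tracing 'state' here exposes the transition sequence whose
        // reachability the behavioral analysis examines.
    }

    SC_CTOR(Automaton) : state(N - 1) {
        SC_METHOD(step);
        sensitive << clk.pos();
        dont_initialize();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<bool> reward, action;
    Automaton a("a");
    a.clk(clk); a.reward(reward); a.action(action);
    for (int i = 0; i < 8; ++i) {       // alternate penalty/reward stimuli
        reward.write(i % 2 != 0);
        sc_start(10, SC_NS);
    }
    return 0;
}
```

Because every transition is a small, discrete move along a state chain, each decision can be traced back step by step through the reinforcement history, which is the property the reachability analysis exploits.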
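For the fault injection side, a stuck-at-1 fault on a state-register bit can be modeled by OR-ing that bit into the state after every update, then enumerating which states remain reachable. The sketch below is again an assumption-laden illustration (the fault model, the function names, and the breadth-first enumeration are ours, not the paper's low-level SystemC campaign), using the same assumed automaton transition function.

```cpp
#include <cstdint>
#include <iostream>
#include <queue>
#include <set>

constexpr int N = 4;  // assumed states per action; valid states are 0..2N-1

// Transition function of the assumed two-action automaton.
uint32_t step(uint32_t s, bool reward) {
    if (reward) {
        if (s < N)  { if (s > 0)         --s; }  // deepen action 0
        else        { if (s < 2 * N - 1) ++s; }  // deepen action 1
    } else {
        if (s < N) ++s;   // penalty: drift toward action 1
        else       --s;   // penalty: drift toward action 0
    }
    return s;
}

// Stuck-at-1 fault on one state-register bit (hypothetical fault model):
// the bit is forced to 1 after every transition.
uint32_t inject_sa1(uint32_t s, int bit) { return s | (1u << bit); }

// Breadth-first reachability over the {penalty, reward} input alphabet.
std::set<uint32_t> reachable(uint32_t init, int faulty_bit) {
    uint32_t start = inject_sa1(init, faulty_bit);
    std::set<uint32_t> seen{start};
    std::queue<uint32_t> work;
    work.push(start);
    while (!work.empty()) {
        uint32_t s = work.front(); work.pop();
        for (bool r : {false, true}) {
            uint32_t t = inject_sa1(step(s, r), faulty_bit);
            if (seen.insert(t).second) work.push(t);
        }
    }
    return seen;
}

int main() {
    // With N = 4 the state register has 3 bits; try a fault on each.
    for (int bit = 0; bit < 3; ++bit) {
        auto r = reachable(N - 1, bit);
        std::cout << "stuck-at-1 on bit " << bit << ": "
                  << r.size() << " of " << 2 * N << " states reachable\n";
    }
    return 0;
}
```

Comparing the reachable set with and without the injected fault shows how a stuck-at-1 fault prunes the state space, which is one way to quantify the dependability behavior the abstract refers to.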