Abstract
Deep learning has proven highly successful in supporting humans in the interpretation of complex data (such as images and text) for critical decision tasks. However, it remains difficult for human experts to understand how such results are achieved, due to the "black box" nature of the deep models used. In high-stakes decision-making scenarios such as the interpretation of medical imaging for diagnostics, this lack of transparency still hinders the adoption of these techniques in practice. In this position paper we present a conceptual methodology for the design of a neuro-symbolic cycle to address the need for explainability and confidence (including trust) in deep learning models when they are used to support human experts in high-stakes decision making, and we discuss challenges and opportunities in the implementation of such a cycle as well as its adoption in real-world scenarios. We elaborate on the need to leverage the potential of hybrid artificial intelligence, combining neural learning and symbolic reasoning in a human-centered approach to explainability. We advocate that the phases of such a cycle should include: i) the extraction of knowledge from a trained network to represent and encode its behaviour; ii) the validation of the extracted knowledge through commonsense and domain knowledge; iii) the generation of explanations for human experts; iv) the mapping of human feedback onto the validated representation from i); and v) the injection of some of this knowledge into a non-trained network to enable knowledge-informed representation learning. The holistic combination of causality, expressive logical inference, and representation learning would result in a seamless integration of (neural) learning and (cognitive) reasoning, making it possible to retain access to the inherently explainable symbolic representation without losing the power of the deep representation. The involvement of human experts in the design, validation, and knowledge-injection process is crucial, as the conceptual approach paves the way for a new human–AI paradigm in which the human role goes beyond that of labeling data, towards the validation of neural-cognitive knowledge and processes.
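To make the control flow of the proposed cycle concrete, the following minimal Python sketch illustrates how the five phases could be chained together. It is purely illustrative: all names (Rule, extract, is_consistent, explain, expert_feedback, inject) are hypothetical placeholders introduced here and are not part of the paper's methodology.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        # Illustrative symbolic rule extracted from a trained network.
        body: str
        head: str

    def neuro_symbolic_cycle(
        trained_net: object,
        fresh_net: object,
        extract: Callable[[object], List[Rule]],          # i) knowledge extraction
        is_consistent: Callable[[Rule], bool],            # ii) validation via commonsense/domain knowledge
        explain: Callable[[Rule], str],                   # iii) explanation generation for experts
        expert_feedback: Callable[[List[str], List[Rule]], List[Rule]],  # iv) feedback mapped onto the rules
        inject: Callable[[object, List[Rule]], object],   # v) knowledge injection into a non-trained network
    ) -> object:
        rules = extract(trained_net)                        # i
        validated = [r for r in rules if is_consistent(r)]  # ii
        explanations = [explain(r) for r in validated]      # iii
        revised = expert_feedback(explanations, validated)  # iv
        return inject(fresh_net, revised)                   # v

In an actual implementation each placeholder would correspond to a concrete technique for extraction, validation, explanation generation, feedback handling, and injection.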