Abstract

Convolutional Neural Networks (CNNs) have been widely used for complex image recognition tasks. Because the latent features in the convolutional kernels of CNNs learn highly entangled correlations, deriving human-comprehensible knowledge from CNNs has proven difficult. As a result, reasoning over relationships between kernels has been limited, and little knowledge is transferred from one task to a related task learned by CNNs. This paper introduces a neural-symbolic approach for providing semantically meaningful explanations of CNNs using logical rules and a shared conceptual representation space that captures the meaning of the knowledge learned. The validity of the proposed approach is demonstrated on benchmark chest X-rays of two respiratory conditions: pleural effusion and COVID-19. Our results show empirically that symbolic rules can be associated with semantically meaningful explanations obtained from different but related CNN models, even in domains requiring specialised knowledge such as medical imaging. This work is expected to aid the analysis of black-box CNNs by associating the predictions obtained from the CNNs with clinical research findings.