Abstract

Smart manufacturing increasingly relies on emerging deep learning models, particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), for image-based industrial diagnostics tasks such as classification, detection, recognition, prediction, synthetic data generation, and security. Although effective for these objectives, most current deep learning models lack interpretability and explainability. They can discover features hidden within input data, together with their mutual co-occurrence, but they are weak at discovering and making explicit the hidden causalities between features that could underlie particular diagnoses. In this paper, we propose Causality-Aware CNNs (CA-CNNs) and Causality-Aware GANs (CA-GANs) to address the problem of learning hidden causalities within images. The core architecture adds a layer of neurons (after the last convolution-pooling layer and just before the dense layers) that learns pairwise conditional probabilities (i.e., causality estimates) for the features. Computations in these neurons are driven by the adaptive Lehmer mean function. The learned causalities are merged with the features during flattening and, via the fully connected layers, influence the classification outcomes. Such causality estimates can also be computed for mixed inputs in which images are combined with other data. We argue that CA-CNNs not only improve the classification performance of ordinary CNNs but also open additional opportunities for explaining the models' outcomes. A further advantage of CA-CNNs, when used as discriminators within CA-GANs, is the possibility of generating realistic-looking images that respect the learned causalities.
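The abstract does not specify how the causality layer computes its pairwise conditional probabilities. As a minimal, hypothetical sketch only (assuming nonnegative feature-map activations scaled to [0, 1], and approximating P(f_i | f_j) by aggregating joint activations with the Lehmer mean; the function names, the fixed exponent `p`, and the elementwise-product proxy for joint presence are all assumptions, not the authors' method), the idea could look like:

```python
import numpy as np

def lehmer_mean(x, p):
    """Lehmer mean L_p(x) = sum(x_i^p) / sum(x_i^(p-1)).

    The exponent p is the tunable ("adaptive") parameter; p = 1 gives the
    arithmetic mean, larger p weights larger activations more heavily.
    """
    x = np.asarray(x, dtype=float)
    return (x ** p).sum() / (x ** (p - 1)).sum()

def causality_map(feature_maps, p=2.0, eps=1e-8):
    """Pairwise conditional-probability estimates C[i, j] ~ P(f_i | f_j).

    feature_maps: array of shape (n_features, h * w), nonnegative
    activations in [0, 1]. Each map is aggregated to a scalar "presence"
    score via the Lehmer mean; joint presence of a feature pair is
    approximated by the elementwise product of their maps (an assumption
    for this sketch). Returns an (n_features, n_features) matrix that
    would be concatenated with the flattened features before the
    fully connected layers.
    """
    F = np.asarray(feature_maps, dtype=float)
    n = F.shape[0]
    presence = np.array([lehmer_mean(f + eps, p) for f in F])
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            joint = lehmer_mean(F[i] * F[j] + eps, p)
            C[i, j] = joint / (presence[j] + eps)  # ~ P(f_i | f_j)
    return C
```

In a trainable version, `p` would itself be a learned parameter per neuron, which is presumably what makes the Lehmer mean "adaptive" in the proposed architecture.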
