Abstract

This presentation explores an end-to-end AI workflow by implementing and validating an explainable one-class classification algorithm. The neural network accurately detects cracks in images of concrete while returning a heatmap of anomalies. A semi-supervised, one-class learning approach is taken: the network is trained only on images without anomalies, yet it learns to distinguish anomalous scenes from normal ones.

One-class learning offers several advantages for anomaly detection problems:

1. Examples of anomalies can be scarce.
2. Anomalies can represent expensive or catastrophic outcomes.
3. There can be many kinds of anomalies, and these can change over the lifetime of the model. Describing what "good" looks like is often more feasible than providing data that represents all possible anomalies in real-world settings.

A crucial part of detection is for a human observer to be able to understand why a trained network classifies an image as an anomaly. Explainable classification supplements the class prediction with information that justifies how the neural network reached its decision. The presentation further explores how to deploy the algorithm onto a portable device, so that classification can happen in real time, on-premises.
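The abstract does not specify the network architecture, but the core idea of one-class anomaly detection with an explanatory heatmap can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the presenters' method): it stands in for the neural network with a PCA model fitted only to patches from normal images, and scores each patch of a test image by its reconstruction error, so that regions the "normal" model cannot explain light up in the heatmap. All names, patch sizes, and the synthetic "concrete" textures are assumptions for illustration.

```python
import numpy as np

def extract_patches(img, p=8):
    """Split a square grayscale image into non-overlapping p x p patches."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, h - p + 1, p)
                     for j in range(0, w - p + 1, p)])

def fit_normal_model(normal_imgs, p=8, k=4):
    """Learn a low-rank 'normal' subspace from anomaly-free images only."""
    X = np.vstack([extract_patches(im, p) for im in normal_imgs])
    mu = X.mean(axis=0)
    # top-k principal directions of the normal patches
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def anomaly_heatmap(img, mu, V, p=8):
    """Per-patch reconstruction error: high where img deviates from normal."""
    P = extract_patches(img, p)
    recon = mu + (P - mu) @ V.T @ V
    err = ((P - recon) ** 2).mean(axis=1)
    return err.reshape(img.shape[0] // p, -1)

rng = np.random.default_rng(0)
# synthetic stand-in for concrete texture: smooth noise = "normal"
normal = [rng.normal(0.5, 0.05, (32, 32)) for _ in range(20)]
mu, V = fit_normal_model(normal)

test = rng.normal(0.5, 0.05, (32, 32))
test[10:22, 14:18] = 0.0  # inject a dark "crack" the model never saw
heat = anomaly_heatmap(test, mu, V)
# heat should peak on the patches covering the injected crack
```

Training sees only normal data, yet the model flags the crack: reconstruction error is a crude form of the explainability the abstract describes, since the heatmap shows *where* the image violated the learned notion of normal, not just *that* it did.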
