Abstract

Accurate and automated detection of anomalous samples in an image dataset can be accomplished with a probabilistic model. However, such images have heterogeneous complexity, and a probabilistic model tends to overlook simply shaped objects with small anomalies. The reason is that the model assigns undesirably low likelihoods to complexly shaped objects, even when they conform to the set standards. This difficulty is critical, especially for a defect detection task, where the anomaly can be a small scratch or a patch of grime. To overcome this difficulty, we propose an unregularized score for deep generative models (DGMs). We found that the regularization terms of the DGMs considerably influence the anomaly score depending on the complexity of the samples. By removing these terms, we obtain an unregularized score, which we evaluated on toy datasets, two in-house manufacturing datasets, and open manufacturing and medical datasets. The empirical results demonstrate that the unregularized score is robust to the apparent complexity of given samples and detects anomalies selectively.
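To make the idea concrete, below is a minimal sketch of the kind of score the abstract describes, assuming a VAE-style DGM whose standard per-sample anomaly score is the negative ELBO (reconstruction term plus a KL regularization term). The model classes `TinyEncoder`/`TinyDecoder`, the Gaussian decoder, and the toy data are illustrative assumptions, not the paper's exact models or score definition.

```python
# Sketch: standard VAE anomaly score (negative ELBO) vs. a score with the
# regularization (KL) term removed. All names and dimensions are illustrative.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    def __init__(self, x_dim=8, z_dim=2):
        super().__init__()
        self.mu = nn.Linear(x_dim, z_dim)
        self.logvar = nn.Linear(x_dim, z_dim)

    def forward(self, x):
        # Parameters of the approximate posterior q(z|x) = N(mu, diag(exp(logvar)))
        return self.mu(x), self.logvar(x)


class TinyDecoder(nn.Module):
    def __init__(self, x_dim=8, z_dim=2):
        super().__init__()
        self.out = nn.Linear(z_dim, x_dim)

    def forward(self, z):
        return self.out(z)


def anomaly_scores(x, encoder, decoder):
    """Return (regularized, unregularized) per-sample anomaly scores."""
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
    x_hat = decoder(z)

    # Reconstruction term: squared error, i.e. a Gaussian decoder up to constants.
    recon = ((x - x_hat) ** 2).sum(dim=1)

    # Regularization term: KL(q(z|x) || N(0, I)) of the ELBO.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)

    regularized = recon + kl   # usual score; penalizes "complex" normal samples
    unregularized = recon      # score with the regularization term removed
    return regularized, unregularized


if __name__ == "__main__":
    enc, dec = TinyEncoder(), TinyDecoder()
    x = torch.randn(4, 8)  # a toy batch of 4 samples
    reg, unreg = anomaly_scores(x, enc, dec)
    print(reg, unreg)
```

Under this assumption, the unregularized score ranks samples by reconstruction quality alone, so a complexly shaped but standard-conforming object is not penalized merely for being hard to model.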
