Abstract

The Reed–Xiaoli detector (RXD) is recognized as the benchmark algorithm for image anomaly detection; however, it has known limitations, namely its assumption that the image follows a multivariate Gaussian model, the need to estimate and invert a high-dimensional covariance matrix, and its inability to effectively include spatial awareness in its evaluation. In this work, a novel graph-based solution to the image anomaly detection problem is proposed; leveraging the graph Fourier transform, we are able to overcome some of RXD’s limitations while at the same time reducing computational cost. Tests on both hyperspectral and medical images, using both synthetic and real anomalies, show that the proposed technique achieves significant performance gains over state-of-the-art algorithms.

Highlights

  • Anomaly detection is the task of spotting items that do not conform to the expected pattern of the data

  • The first one is hyperspectral remote sensing, which is one of the most common use cases for anomaly detection where the use of Reed–Xiaoli detector (RXD) is widely validated [49]; the second one is the domain of 3D volumetric segmentation of tumoral masses on positron emission tomography (PET) images, where we successfully explored the use of RXD in the past [10,63,65]

  • The results will be compared with those yielded by classic RXD, random-selection-based anomaly detector (RSAD), and whitening spatial correlation filtering (WSCF)

Introduction

Anomaly detection is the task of spotting items that do not conform to the expected pattern of the data. Although deep learning methods have found increasingly wide adoption for many other tasks in image processing and computer vision [15,35,71], their application to anomaly detection, especially on hyperspectral and medical imaging, is hindered by multiple factors: first, pixels have to be judged anomalous according to intra-image metrics that are difficult to capture in a dataset; second, the amount of data required to train such models is often not available in these contexts [11,44]. For these reasons, classical unsupervised approaches are preferable. The typical strategy is to extract knowledge of the background statistics from the data and to measure the deviation of each examined pixel from that learned background according to some affinity function.
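This "background statistics plus deviation measure" strategy is exactly what the classic RX detector implements: the background is summarized by the global mean and covariance, and the affinity function is the squared Mahalanobis distance. The sketch below is a minimal NumPy illustration of that baseline, not the graph-based method proposed in this work; the function name and the synthetic image are our own assumptions for demonstration.

```python
import numpy as np

def rx_detector(image):
    """Classic Reed-Xiaoli (RX) anomaly score per pixel.

    image: (H, W, B) array with B spectral bands.
    Returns an (H, W) map of squared Mahalanobis distances of each
    pixel from the global background statistics.
    """
    h, w, b = image.shape
    pixels = image.reshape(-1, b).astype(float)   # one row per pixel
    mu = pixels.mean(axis=0)                      # background mean
    cov = np.cov(pixels, rowvar=False)            # B x B background covariance
    cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for numerical stability
    centered = pixels - mu
    # Squared Mahalanobis distance (x - mu)^T Sigma^{-1} (x - mu) per pixel
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(h, w)

# Tiny synthetic example: Gaussian background with one injected anomaly.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(16, 16, 5))
img[8, 8] += 10.0                                 # anomalous pixel at (8, 8)
score_map = rx_detector(img)
assert score_map.argmax() == 8 * 16 + 8           # the anomaly gets the top score
```

Note that the B x B covariance estimation and inversion in this sketch is precisely the high-dimensional step the abstract identifies as a limitation of RXD when the number of bands grows.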


