Abstract

Anomalies in data have traditionally been considered nuisances whose presence, if ignored, can have a detrimental effect on the output of many data processing tasks. Nevertheless, in many situations anomalies correspond to events of interest and as such should be promptly identified before their presence is masked by the data preprocessing schemes used to reduce the complexity of the main data processing task. This work develops a robust dictionary learning algorithm that exploits the notions of sparsity and local geometry of the data to identify anomalies while constructing sparse representations of the data. Sparsity is used to model the presence of anomalies in a dataset, and local geometry is exploited to better qualify a datum as an anomaly. The robust dictionary learning problem is cast as a regularized least-squares problem with sparsity-inducing and Laplacian regularization terms. Efficient iterative solvers based on block-coordinate descent and proximal gradient methods are developed to tackle the resulting joint dictionary learning and anomaly detection problems. The proposed framework is extended to address variations of classical dictionary learning and matrix factorization problems. Numerical tests on real datasets with artificial and real anomalies illustrate the performance of the proposed algorithms.
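The abstract does not state the exact objective or update rules, but a minimal sketch consistent with its description is given below: a regularized least-squares fit with an ℓ1 (sparsity-inducing) penalty on the codes and on an anomaly term, a graph-Laplacian regularizer capturing local geometry, and block-coordinate descent where each block is updated by a proximal-gradient (soft-thresholding) step. The symbols Y (data matrix), D (dictionary), X (sparse codes), O (anomaly/outlier term), and L (graph Laplacian), the specific penalties, and the fixed step size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def robust_dictionary_learning(Y, L, n_atoms, lam_x=0.1, lam_o=0.5, gamma=0.1,
                               n_iters=50, step=1e-2, rng=None):
    """
    Block-coordinate descent sketch (assumed objective) for
        min_{D,X,O} 0.5*||Y - D X - O||_F^2 + lam_x*||X||_1
                    + lam_o*||O||_1 + 0.5*gamma*tr(O L O^T),
    where large entries/columns of O flag anomalous data.
    Each block (X, O, D) is updated with one proximal-gradient step per sweep.
    """
    rng = np.random.default_rng(rng)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)            # unit-norm atoms
    X = np.zeros((n_atoms, n))
    O = np.zeros((d, n))

    for _ in range(n_iters):
        # X-step: gradient on the quadratic fit, then soft-threshold (l1 prox)
        R = Y - D @ X - O
        X = soft_threshold(X + step * (D.T @ R), step * lam_x)

        # O-step: the Laplacian term couples columns of O through the data graph
        R = Y - D @ X - O
        grad_O = -R + gamma * (O @ L)
        O = soft_threshold(O - step * grad_O, step * lam_o)

        # D-step: gradient step followed by column renormalization
        R = Y - D @ X - O
        D = D + step * (R @ X.T)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

    return D, X, O
```

In this sketch, the columns of O with large norm would be reported as anomalies; in practice one would replace the fixed step size with a line search or Lipschitz-based step to guarantee descent.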
