Abstract

Considerable effort has been devoted to improving representation performance for Hyperspectral Anomaly Detection (HAD), both through physical model-based methods and through recent deep learning-based approaches. Among these, the Low-Rank Representation (LRR) model is widely adopted for its strong ability to separate background from target features. However, its practical application is limited by its reliance on manual parameter selection and its subpar generalization performance. To this end, this paper presents a new HAD baseline network, referred to as LRR-Net, which synergizes the LRR model with deep learning techniques. LRR-Net leverages the alternating direction method of multipliers (ADMM) optimizer to solve the LRR model efficiently and incorporates the solution as prior knowledge into the deep network to guide the optimization of its parameters. Moreover, LRR-Net transforms the regularization parameters into trainable parameters of the deep neural network, thus alleviating the need for manual parameter tuning. Additionally, this paper proposes a sparse neural network embedding to demonstrate the scalability of the LRR-Net framework. Empirical evaluations on eight distinct datasets illustrate the efficacy and superiority of the proposed approach compared with state-of-the-art methods.
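To make the classical model that LRR-Net unfolds concrete, the sketch below shows a standard ADMM (inexact ALM) solver for the LRR problem min ||Z||_* + λ||E||_{2,1} s.t. X = XZ + E, where X holds the spectra as columns, XZ recovers the low-rank background, and the column-sparse E flags anomalies. This is a minimal illustration of the underlying optimization, not the paper's network; the parameter values (λ, the μ schedule) are illustrative assumptions of exactly the kind LRR-Net learns instead of hand-tuning.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def col_shrink(M, tau):
    """Column-wise l2 shrinkage: proximal operator of tau * l2,1 norm."""
    norms = np.maximum(np.linalg.norm(M, axis=0), 1e-12)
    return M * np.maximum(1.0 - tau / norms, 0.0)

def lrr_admm(X, lam=0.1, mu=1e-2, rho=1.5, mu_max=1e6, n_iter=200, tol=1e-7):
    """Solve  min ||Z||_* + lam * ||E||_{2,1}  s.t.  X = X Z + E
    via inexact ALM / ADMM with an auxiliary variable J = Z."""
    d, n = X.shape
    Z = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))  # Lagrange multipliers
    XtX = X.T @ X
    I = np.eye(n)
    for _ in range(n_iter):
        # J-step: nuclear-norm prox (singular value thresholding)
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # Z-step: closed-form least-squares update
        Z = np.linalg.solve(I + XtX,
                            XtX - X.T @ E + J + (X.T @ Y1 - Y2) / mu)
        # E-step: l2,1-norm prox (column-wise shrinkage)
        E = col_shrink(X - X @ Z + Y1 / mu, lam / mu)
        # Dual ascent and penalty update
        R1 = X - X @ Z - E
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E
```

Per-pixel anomaly scores are then the column norms of E: pixels whose spectra cannot be represented in the low-rank background subspace receive large residual energy. Unrolling a fixed number of these three prox/solve steps into network layers, with λ and μ made trainable, is the kind of model-to-network translation the abstract describes.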
