Abstract

Support estimation (SE) of a sparse signal refers to finding the location indices of the nonzero elements in a sparse representation. Most traditional approaches to SE are iterative algorithms based on greedy methods or optimization techniques. Indeed, a vast majority of them use sparse signal recovery (SR) techniques to obtain support sets instead of directly mapping the nonzero locations from denser measurements (e.g., compressively sensed measurements). This study proposes a novel approach for learning such a mapping from a training set. To accomplish this objective, convolutional sparse support estimator networks (CSENs), each with a compact configuration, are designed. The proposed CSEN can be a crucial tool for the following scenarios: 1) real-time and low-cost SE can be applied in any mobile and low-power edge device for anomaly localization, simultaneous face recognition, and so on; and 2) CSEN's output can directly be used as "prior information," which improves the performance of sparse SR algorithms. The results over the benchmark datasets show that state-of-the-art performance levels can be achieved by the proposed approach with a significantly reduced computational complexity.
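Concretely, the support of a sparse vector is simply the set of indices where it is nonzero. A minimal NumPy sketch of this definition (the function name and tolerance are illustrative, not from the paper):

```python
import numpy as np

def support(x, tol=1e-8):
    """Return the support set of x: indices i with |x_i| > tol."""
    return set(np.flatnonzero(np.abs(x) > tol))

x = np.array([0.0, 1.3, 0.0, 0.0, -0.7, 0.0])
print(support(x))  # {1, 4}
```

A support estimator such as CSEN tries to predict this index set directly from compressed measurements, without first recovering the full vector x.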

Highlights

  • Sparse representation or sparse coding (SC) denotes representing a signal as a linear combination of only a small subset of a predefined set of waveforms

  • If one wants to compare the performance in terms of F1-Score instead of F2-Score, CSEN-based classification still achieves performance comparable to SVM

  • Sparse support estimators that work based on traditional sparse signal recovery (SR) techniques suffer from high computational complexity and sensitivity to noise


Introduction

Sparse representation or sparse coding (SC) denotes representing a signal as a linear combination of only a small subset of a predefined set of waveforms. Compressive sensing (CS) [1], [2] can be seen as a special form of SC: a signal s ∈ R^d that has a sparse representation x ∈ R^n in a dictionary or basis in R^(d×n) can be acquired in a compressed manner using a linear dimensionality-reduction matrix A ∈ R^(m×d). Iteratively recovering such a sparse representation from the compressed measurements is the main principle of most greedy algorithms [8], [9]. (Manuscript received April 2, 2020; revised October 23, 2020 and April 16, 2021; accepted June 25, 2021.)
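The greedy recovery principle mentioned above can be illustrated with a minimal orthogonal matching pursuit (OMP) sketch: repeatedly pick the dictionary atom most correlated with the residual, then re-fit. All names and dimensions below are illustrative, and this is a generic greedy baseline, not the paper's CSEN method:

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: select the atom most correlated with the residual, k times."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x_hat = np.zeros(D.shape[1])
    x_hat[support] = coef
    return x_hat, set(support)

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 80))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(80)
x_true[[3, 17]] = [1.5, -2.0]           # 2-sparse ground truth
y = D @ x_true                          # compressed measurements
x_hat, est = omp(D, y, k=2)
print(est)                              # estimated support indices
```

Note that such iterative methods must effectively solve the recovery problem to obtain the support, which is exactly the computational burden the learned direct mapping of CSEN is designed to avoid.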
