Abstract

The sparse prior has been widely adopted to establish data models for numerous applications. Most such models are based on one of three foundational paradigms: the conventional sparse representation, the convolutional sparse representation, and the multi-layer convolutional sparse representation. When the data morphology has been adequately addressed, a sparse representation can be obtained by solving the sparse coding problem specified by the data model. This article presents a comprehensive overview of these three models and their corresponding sparse coding problems, and demonstrates that they can be solved using convex and non-convex optimization approaches. When the data morphology is not known or cannot be analyzed, it must be learned from training data, which gives rise to dictionary learning problems. This article addresses two dictionary learning paradigms. In the unsupervised setting, dictionary learning alternates between, or jointly solves, sparse coding and dictionary updating. Alternatively, a recurrent neural network can be constructed by unrolling algorithms designed to solve sparse coding problems; such networks can then be used in a supervised learning setting to train dictionaries via forward-backward optimization. This article lists numerous applications in various domains and outlines several directions for future research related to the sparse prior.

This article is categorized under: Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods; Statistical and Graphical Methods of Data Analysis > Modeling Methods and Algorithms; Statistical Models > Nonlinear Models
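To make the sparse coding and unsupervised dictionary learning loop described above concrete, the following is a minimal sketch (not the article's specific algorithms): ISTA, a standard proximal-gradient method for the l1-regularized sparse coding problem, alternated with a simple MOD-style least-squares dictionary update. The function names, regularization weight, and iteration counts are illustrative assumptions, and the dictionary update rule is only one of several options covered by this family of methods.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=100):
    """Sparse coding: minimize 0.5*||y - D x||_2^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth term's gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)               # gradient of the data-fidelity term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding (prox of l1)
    return x

def dictionary_learning(Y, n_atoms, lam=0.1, n_outer=20):
    """Unsupervised dictionary learning: alternate sparse coding and dictionary updating."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                              # unit-norm atoms
    for _ in range(n_outer):
        X = np.column_stack([ista(D, y, lam) for y in Y.T])     # sparse coding step
        D = Y @ np.linalg.pinv(X)                               # MOD-style least-squares update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)       # renormalize atoms
    return D, X
```

Unrolling a fixed number of such ISTA iterations and making the step matrices and thresholds learnable yields LISTA-style networks that can be trained end to end by backpropagation, which corresponds to the supervised dictionary learning route summarized in the abstract.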
