Abstract
In sparse representation problems, there is continuing interest in reducing the solution space by introducing additional constraints, which can lead to efficient application-specific algorithms. Despite the known advantages of sparsity and non-negativity for image data representation, few studies have addressed both characteristics simultaneously, owing to the challenges involved. In this paper, we propose a novel, computationally inexpensive sparse non-negative reconstruction method. We incorporate a non-negativity penalty term into a convex objective while simultaneously imposing sparsity. Our method, termed SnSA (smooth non-negative sparse approximation), applies a novel thresholding strategy to the sparse coefficients during minimisation of the proposed convex function. The main advantage of the SnSA algorithm is that it avoids hard-zeroing of negative samples, which leads to unstable and non-optimal sparse solutions. Instead, a differentiable smoothing function is proposed that gradually suppresses negative samples, leading to a sparse non-negative solution. In this way, the algorithm is driven towards a solution that balances maximising sparsity with minimising reconstruction error. Numerical and experimental results on both synthetic signals and well-established face and handwritten-digit image databases indicate higher classification performance for the proposed method compared with state-of-the-art techniques.
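To make the general idea concrete, the sketch below shows a minimal, hypothetical ISTA-style sparse coder in which negative coefficients are discouraged by a differentiable penalty rather than being hard-zeroed. The softplus-style penalty, the parameter names (`lam`, `mu`, `beta`), and the overall loop structure are assumptions chosen for illustration only; they are not the authors' SnSA algorithm or its thresholding strategy.

```python
import numpy as np

def smooth_neg_penalty_grad(x, beta=10.0):
    """Gradient of an assumed differentiable non-negativity penalty
    p(x) = (1/beta) * softplus(-beta * x), which is ~0 for x >> 0 and
    grows roughly linearly for x << 0, so negative coefficients are
    suppressed gradually instead of being hard-zeroed."""
    return -1.0 / (1.0 + np.exp(beta * x))

def soft_threshold(x, t):
    """Standard soft-thresholding operator (l1 proximal map) promoting sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_nonneg_approx(D, y, lam=0.1, mu=1.0, beta=10.0, n_iter=200):
    """Illustrative loop: least-squares data term + l1 sparsity
    + smooth non-negativity penalty weighted by mu. A generic sketch of the
    concept described in the abstract, not the paper's SnSA method."""
    # Conservative step size: 1 / (Lipschitz constant of the smooth part).
    L = np.linalg.norm(D, 2) ** 2 + mu * beta / 4.0
    step = 1.0 / L
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y) + mu * smooth_neg_penalty_grad(x, beta)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a sparse non-negative code of y in a random dictionary D.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[3, 40, 77]] = [0.8, 1.2, 0.5]
y = D @ x_true
x_hat = sparse_nonneg_approx(D, y, lam=0.05, mu=1.0)
print("negative mass:", x_hat[x_hat < 0].sum(),
      "nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

In this sketch, increasing `mu` pushes the solution towards non-negativity more aggressively, while `lam` controls sparsity; the paper's balance between sparsity and reconstruction error would be governed by its own (different) thresholding and smoothing scheme.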