In sparse representation problems, there is always interest in reducing the solution space by introducing additional constraints, which can lead to efficient application-specific algorithms. Despite the known advantages of sparsity and non-negativity for image data representation, few studies have addressed both characteristics simultaneously, owing to the challenges involved. In this paper, we propose a novel, inexpensive sparse non-negative reconstruction method. We incorporate a non-negativity penalty term into a convex objective function while simultaneously imposing sparsity. Our method, termed SnSA (smooth non-negative sparse approximation), applies a novel thresholding strategy to the sparse coefficients during the minimisation of the proposed convex function. The main advantage of the SnSA algorithm is that it avoids hard zeroing of negative samples, which leads to unstable and non-optimal sparse solutions. Instead, a differentiable smoothing function is proposed that gradually suppresses negative samples, leading to a sparse non-negative solution. In this way, the algorithm is driven towards a solution that balances maximising sparsity against minimising the reconstruction error. Our numerical and experimental results on both synthetic signals and well-established face and handwritten image databases indicate higher classification performance for the proposed method compared to state-of-the-art techniques.
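The abstract does not give the exact objective or smoothing function used by SnSA, so the following is only a minimal illustrative sketch of the general idea it describes: a proximal-gradient loop that promotes sparsity via soft-thresholding while a differentiable (here, softplus-style) penalty gradually discourages negative coefficients instead of hard zeroing them. The function name `snsa_sketch` and the parameters `lam`, `mu`, and `n_iter` are hypothetical and not taken from the paper.

```python
import numpy as np

def snsa_sketch(D, y, lam=0.1, mu=0.5, n_iter=200):
    """Illustrative sketch only (not the authors' algorithm).

    Approximately minimises
        0.5 * ||y - D x||^2 + lam * ||x||_1 + mu * sum(softplus(-x))
    where the softplus term is an assumed smooth non-negativity penalty:
    it is differentiable everywhere and penalises negative entries
    gradually rather than clipping them to zero.
    """
    x = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
    step = 1.0 / (L + mu)                  # conservative step size for the smooth part
    for _ in range(n_iter):
        # Gradient of the smooth terms: data fit plus the assumed penalty.
        # d/dx softplus(-x) = -sigmoid(-x) = -1 / (1 + exp(x))
        grad = D.T @ (D @ x - y) - mu / (1.0 + np.exp(x))
        z = x - step * grad
        # Soft-thresholding (proximal step for the l1 term) enforces sparsity.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Toy usage on synthetic data: recover a sparse non-negative code.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, size=5, replace=False)] = rng.uniform(0.5, 2.0, size=5)
y = D @ x_true
x_hat = snsa_sketch(D, y)
print("negative entries after smoothing penalty:", int((x_hat < -1e-3).sum()))
```

The point of the sketch is the contrast between the two mechanisms: the soft-thresholding step handles sparsity exactly, while negativity is only discouraged through a smooth gradient term, so coefficients drift towards the non-negative orthant over the iterations rather than being forced there in a single non-differentiable operation.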