Abstract

Deep dictionary learning (DDL) can mine deeper representations of data more effectively than single-layer dictionary learning. However, existing DDL methods rely on hand-designed sparse regularizers, which confine the deep sparse representations to pre-specified forms. In addition, existing DDL optimization methods require the nonlinear functions to be invertible, which limits their feature extraction capability. This paper presents a new DDL model with a learned sparsity constraint and a noninvertible soft-thresholding (ST) function: the learned sparsity constraint yields a data-driven sparse representation, while the ST function, whose output is sparse, serves as the nonlinear activation and enhances feature extraction. To efficiently solve the optimization problem under the noninvertible ST function and the learned sparsity constraint, we employ a lifted proximal operator machine to transform the DDL problem into a series of subproblems comprising sparsity-regularized and convex minimizations. For the sparsity-regularized minimization, we derive the parameterized proximal operator of the sparse regularizer, which serves as the activation function used to construct the network. The parameters of the activation function are trained by backpropagation, so the proximal operator of the learnable sparse regularizer is obtained simultaneously with a sparse solution. The convex minimizations for the dictionaries and the coefficients are solved via the accelerated proximal gradient method and the optimality condition, respectively. In numerical classification and reconstruction experiments, the proposed algorithm outperformed existing DDL algorithms in classification accuracy, image reconstruction quality, and noise immunity.
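
To make the central mechanism concrete, the following is a minimal PyTorch sketch of a soft-thresholding activation with a trainable threshold. Soft thresholding, ST_theta(x) = sign(x) * max(|x| - theta, 0), is the proximal operator of the L1 regularizer theta * ||.||_1, so training theta by backpropagation is one simple instance of the learned proximal operator described above. The class name, the single scalar threshold, and the softplus reparameterization are illustrative assumptions, not the authors' exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableSoftThreshold(nn.Module):
    """Soft-thresholding activation ST_theta(x) = sign(x) * max(|x| - theta, 0).

    This is the proximal operator of theta * ||.||_1. Making theta a
    trainable parameter gives a data-driven sparse regularizer: the
    threshold is learned by backpropagation together with the rest of
    the network, rather than fixed in advance.
    """

    def __init__(self, init_threshold: float = 0.1):
        super().__init__()
        # Store the threshold through an inverse softplus so that
        # softplus(raw_threshold) == init_threshold at initialization
        # and the effective threshold stays positive during training.
        raw = torch.log(torch.expm1(torch.tensor(init_threshold)))
        self.raw_threshold = nn.Parameter(raw)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = F.softplus(self.raw_threshold)
        # Shrink every coefficient toward zero by theta; coefficients
        # with magnitude below theta are set exactly to zero, which is
        # what makes the output sparse.
        return torch.sign(x) * torch.relu(torch.abs(x) - theta)
```

Used as the layer-wise activation, the threshold receives gradients through the ordinary training loop, so the sparsity level adapts to the data instead of being set by a hand-chosen regularizer.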
