Abstract
Online kernel-based dictionary learning (DL) algorithms are considered, which perform DL on training data lifted to a high-dimensional feature space via a nonlinear mapping. Compared to their batch counterparts, online algorithms based on stochastic gradient descent have low computational complexity, which is essential for processing big data. However, as with any kernel-based learning algorithm, the number of parameters needed to represent the desired dictionary equals the number of training samples, incurring prohibitive memory requirements and computational complexity for large-scale datasets. In this work, appropriate sparsification and pruning strategies are combined with online kernel DL to mitigate this issue. Numerical tests verify the efficacy of the proposed strategies.
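To make the abstract's idea concrete, below is a minimal illustrative sketch (not the authors' algorithm) of online kernel DL with stochastic gradient updates, a coherence-based sparsification rule, and budget pruning. The Gaussian kernel, the least-squares coding step, the coherence threshold, and all class/parameter names are assumptions introduced for illustration only.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

class OnlineKernelDL:
    """Hypothetical sketch: online kernel dictionary learning via SGD,
    with coherence-based sparsification and budget pruning of the
    retained training samples that parameterize the dictionary."""

    def __init__(self, n_atoms=10, sigma=1.0, step=0.1,
                 coherence_thr=0.9, budget=100, ridge=1e-6):
        self.K, self.sigma, self.step = n_atoms, sigma, step
        self.thr, self.budget, self.ridge = coherence_thr, budget, ridge
        self.S = None     # retained samples spanning the dictionary
        self.A = None     # coefficients: D = Phi(S) @ A in feature space
        self.Kss = None   # kernel matrix among retained samples

    def partial_fit(self, x):
        x = np.atleast_2d(x)
        if self.S is None:  # initialize the support with the first sample
            self.S = x.copy()
            self.A = 1e-2 * np.random.randn(1, self.K)
            self.Kss = gaussian_kernel(self.S, self.S, self.sigma)
            return

        k_t = gaussian_kernel(self.S, x, self.sigma)          # (|S|, 1)
        # Least-squares code of phi(x) over the current dictionary
        G = self.A.T @ self.Kss @ self.A + self.ridge * np.eye(self.K)
        s = np.linalg.solve(G, self.A.T @ k_t)                # (K, 1)
        # SGD step on A: gradient of ||phi(x) - Phi(S) A s||^2 w.r.t. A
        grad = 2.0 * (self.Kss @ self.A @ s - k_t) @ s.T
        self.A -= self.step * grad

        # Sparsification: keep x only if it is not too coherent with S
        if np.max(k_t) < self.thr:                            # k(x, x) = 1 for RBF
            self.S = np.vstack([self.S, x])
            self.A = np.vstack([self.A, np.zeros((1, self.K))])
            self.Kss = gaussian_kernel(self.S, self.S, self.sigma)
            # Pruning: if the budget is exceeded, drop the least-used sample
            if self.S.shape[0] > self.budget:
                j = np.argmin(np.linalg.norm(self.A, axis=1))
                keep = np.arange(self.S.shape[0]) != j
                self.S, self.A = self.S[keep], self.A[keep]
                self.Kss = self.Kss[np.ix_(keep, keep)]
```

In this sketch the memory footprint is bounded by the budget on retained samples rather than by the full training-set size, which is the role the abstract ascribes to sparsification and pruning; the specific coding, admission, and pruning rules used in the paper may differ.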