Abstract
Network training error and sparsity are two critical factors in optimizing the model parameters of existing neuro-evolution algorithms. Alleviating the curse of dimensionality in large-scale neuro-evolution is one of the most serious challenges. Furthermore, unlike existing algorithms, this paper considers compressibility and combines it with sparsity into a joint sparsity-compressibility measure of network complexity. An objective-hierarchy-based neuro-evolutionary algorithm is proposed to comprehensively improve both training-error performance and joint sparsity-compressibility. First, a multi-directional neuro-evolution is constructed using Tucker-based tensor decomposition, which greatly compresses the decision space and generates the subordinates of the decision variables. Each subordinate is then cyclically updated by a multi-linear neuron-inspired search in the compressed decision sub-space. Finally, the superior is constructed by concatenating all subordinates and is updated by a sparse evolutionary scheme. Experimental results on seven classification problems from image/video processing, industrial engineering, biomedical signal processing, and smart grids, against eight other state-of-the-art algorithms, demonstrate the effectiveness of the proposed approach. Moreover, other outstanding large-scale neuro-evolutionary algorithms can be embedded into the tensor-decomposition-based objective-hierarchy model of the proposed algorithm.
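To illustrate how a Tucker-based tensor decomposition can compress a high-dimensional decision space, the following is a minimal NumPy sketch of the higher-order SVD (HOSVD) form of Tucker decomposition. The abstract does not specify the paper's exact decomposition procedure or rank choices, so the function names, ranks, and HOSVD variant here are illustrative assumptions: a weight tensor is projected onto per-mode factor matrices, yielding a small core tensor (the compressed sub-space) plus factors.

```python
import numpy as np

def mode_unfold(T, mode):
    # Matricize tensor T along `mode`: shape (T.shape[mode], prod of other dims).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_fold(M, mode, shape):
    # Inverse of mode_unfold: rebuild a tensor of `shape` from matrix M.
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_multiply(T, M, mode):
    # n-mode product: multiply matrix M into tensor T along `mode`.
    new_shape = list(T.shape)
    new_shape[mode] = M.shape[0]
    return mode_fold(M @ mode_unfold(T, mode), mode, new_shape)

def tucker_hosvd(T, ranks):
    # Illustrative HOSVD: per-mode truncated SVD gives factor matrices;
    # projecting T through their transposes gives the small core tensor.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

# Hypothetical example: a 4x5x6 weight tensor (120 parameters) compressed
# to a 2x3x3 core (18 parameters) plus small factor matrices.
W = np.random.default_rng(0).standard_normal((4, 5, 6))
core, factors = tucker_hosvd(W, (2, 3, 3))
```

Evolving the small core (and factors) instead of the full tensor is one way such a decomposition can alleviate the curse of dimensionality, since the number of decision variables drops from the product of the mode sizes to the (much smaller) core plus factor entries.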