Abstract

As an efficient training strategy for single hidden layer neural networks, the extreme learning machine (ELM) and its variants have been widely used due to their fast learning speed and superior generalization performance. However, when used for quaternion signal processing, the ELM cannot make full use of cross-channel correlations and quaternion statistics and thus often provides only suboptimal solutions. To this end, in this paper we extend the ELM to the quaternion domain and propose two augmented quaternion ELM models for quaternion signal processing. These two models incorporate the involutions of the network input and hidden nodes, respectively, and can fully capture the second-order statistics of quaternion signals. To mitigate possible overfitting, two corresponding regularized augmented algorithms are also derived with the help of the generalized HR (GHR) calculus. Simulation results verify the superiority of the proposed models.
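For readers unfamiliar with augmented quaternion statistics, the following is the standard construction the abstract refers to; it is background material, not an excerpt from the paper. For a quaternion q = q_a + q_b i + q_c j + q_d k, the three involutions are

    q^i = -i q i = q_a + q_b i - q_c j - q_d k
    q^j = -j q j = q_a - q_b i + q_c j - q_d k
    q^k = -k q k = q_a - q_b i - q_c j + q_d k

and the augmented vector q^a = [q^T, (q^i)^T, (q^j)^T, (q^k)^T]^T stacks a quaternion vector with its involutions. Processing q^a rather than q alone is what allows a model to capture the complete second-order statistics: the augmented covariance E{q^a (q^a)^H} contains the ordinary covariance E{q q^H} together with the three complementary covariances E{q (q^i)^H}, E{q (q^j)^H}, and E{q (q^k)^H}, which a non-augmented model discards.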

Highlights

  • In the past decade, the extreme learning machine (ELM) [1]–[3] has become a popular learning strategy for single hidden layer neural networks with proven universal approximation capability

  • The advantages of ELM in efficiency and generalization performance over traditional learning algorithms for feedforward neural networks have been demonstrated on a wide range of problems from many fields [6]–[8], such as time series prediction, computer vision, medical or biomedical data analysis, and system modeling

  • To illustrate the advantages offered by the quaternion ELM (QELM) models over their real-valued counterparts in dealing with 3-D and 4-D signals, simulations were performed on two types of problems: one-step-ahead prediction of synthetic 3-D and 4-D signals and a real-world color face recognition problem



Introduction

The extreme learning machine (ELM) [1]–[3] has become a popular learning strategy for single hidden layer neural networks with proven universal approximation capability. The principle behind ELM is that the weights between the input layer and the hidden layer are randomly assigned, while the weights between the hidden layer and the output layer are determined by least squares optimization. This approach avoids the iterative computation of traditional gradient-based training algorithms [4], [5] and leads to excellent learning speed. The advantages of ELM in efficiency and generalization performance over traditional learning algorithms for feedforward neural networks have been demonstrated on a wide range of problems from many fields [6]–[8], such as time series prediction, computer vision, medical or biomedical data analysis, and system modeling.
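To make this two-step principle concrete, the following is a minimal real-valued ELM sketch in Python/NumPy. The function names, the tanh activation, and the ridge parameter are illustrative assumptions rather than details taken from the paper, whose models operate on quaternion-valued matrices instead.

    import numpy as np

    def elm_train(X, T, n_hidden, reg=0.0, rng=None):
        """Train a single hidden layer network the ELM way.

        X: (N, d) inputs, T: (N, m) targets, n_hidden: number of hidden nodes.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        # Step 1: input-to-hidden weights and biases are random and never updated.
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)  # (N, n_hidden) hidden layer output matrix
        # Step 2: hidden-to-output weights via least squares. With reg > 0 this
        # becomes a ridge-regularized solution, analogous in spirit to the
        # regularized algorithms the paper derives for the quaternion case.
        if reg > 0.0:
            beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
        else:
            beta = np.linalg.pinv(H) @ T
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

Because only the output weights beta are learned, training reduces to a single pseudo-inverse or linear solve, which is the source of ELM's speed advantage over iterative gradient-based training.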


