Abstract

The extreme learning machine-based autoencoder (ELM-AE) has attracted considerable attention owing to its fast learning speed and promising representation capability. However, existing ELM-AE algorithms only reconstruct the original input and generally ignore the probability distribution of the data. The minimum error entropy (MEE), an optimality criterion that accounts for the distribution statistics of the data, is robust in handling non-linear systems and non-Gaussian noise, and minimising the MEE is equivalent to minimising the Kullback–Leibler divergence. Inspired by these advantages, this study proposes a novel randomised AE (RAE) that adopts the MEE criterion as the loss function of the ELM-AE (in short, the MEE-RAE). Instead of solving the output weights by the Moore–Penrose generalised inverse, the optimal output weights are obtained by a fixed-point iteration method. Furthermore, a quantised MEE (QMEE) is applied to reduce the computational complexity of the MEE-RAE. Simulations show that the QMEE-RAE not only achieves superior generalisation performance but is also more robust to non-Gaussian noise than the ELM-AE.
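For concreteness, the MEE criterion is typically implemented by maximising the Parzen-window estimate of the quadratic information potential of the reconstruction errors \(e_i\) (equivalently, minimising Rényi's quadratic entropy):

\[
\hat{V}(e) = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} G_\sigma(e_i - e_j),
\qquad
G_\sigma(u) = \exp\!\left(-\frac{u^2}{2\sigma^2}\right).
\]

The sketch below illustrates one plausible reading of the MEE-RAE training loop under these definitions: a random sigmoid hidden layer as in the standard ELM-AE, followed by a fixed-point iteration on the MEE objective in place of the Moore–Penrose solution. The hyperparameter names (n_hidden, sigma, reg, n_iter) and the per-dimension treatment of the error are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kernel(u, sigma):
    """Gaussian kernel used in the Parzen estimate of the error density."""
    return np.exp(-u ** 2 / (2.0 * sigma ** 2))

def mee_rae_fit(X, n_hidden=64, sigma=1.0, n_iter=20, reg=1e-3, seed=0):
    """Sketch of an MEE-based randomised autoencoder (assumed formulation).

    Hidden layer: random weights/biases + sigmoid, as in a standard ELM-AE.
    Output weights: fixed-point iteration on the MEE (information-potential)
    objective instead of the Moore-Penrose generalised inverse.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    W = rng.standard_normal((d, n_hidden))      # random input weights (fixed)
    b = rng.standard_normal(n_hidden)           # random biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # N x n_hidden hidden outputs

    # Initialise beta with the regularised least-squares (ELM-AE) solution.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)

    # Pairwise differences of hidden outputs, shared across output dimensions.
    dH = H[:, None, :] - H[None, :, :]          # N x N x n_hidden

    for _ in range(n_iter):
        E = X - H @ beta                        # reconstruction errors, N x d
        beta_new = np.empty_like(beta)
        for k in range(d):                      # scalar MEE per output dim
            de = E[:, k][:, None] - E[:, k][None, :]   # pairwise error diffs
            K = gaussian_kernel(de, sigma)      # N x N kernel weights
            # Weighted normal equations of the MEE fixed-point update:
            # beta_k = [sum_ij K_ij dH_ij dH_ij^T]^-1 [sum_ij K_ij dH_ij dx_ij]
            A = np.einsum('ij,ijp,ijq->pq', K, dH, dH) + reg * np.eye(n_hidden)
            dx = X[:, k][:, None] - X[:, k][None, :]
            rhs = np.einsum('ij,ijp,ij->p', K, dH, dx)
            beta_new[:, k] = np.linalg.solve(A, rhs)
        beta = beta_new
    return W, b, beta

# Usage: X = np.random.rand(100, 8); W, b, beta = mee_rae_fit(X)
# Reconstruction: X_hat = 1.0 / (1.0 + np.exp(-(X @ W + b))) @ beta
```

Note that the double sum over error pairs makes each iteration O(N²); the QMEE variant mentioned above reduces this cost by quantising the error samples to a small codebook before evaluating the kernel sums, a step omitted from this sketch.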
