Abstract

A learning scheme based on Extreme Learning Machine (ELM) and L1/2 regularization is proposed for the double parallel feedforward neural network (DPFNN). ELM has been widely used as a fast learning method for feedforward networks with a single hidden layer. A key problem in ELM is choosing the (minimum) number of hidden nodes. To resolve this problem, we propose to combine ELM with the L1/2 regularization method, which has become popular in informatics in recent years. Our experiments show that applying the L1/2 regularizer to a DPFNN trained with ELM yields fewer hidden nodes with equally good performance.
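To make the idea concrete, below is a minimal sketch of the approach described in the abstract: a DPFNN whose hidden weights are generated randomly in the ELM manner, with the output weights fitted by an L1/2-regularized least-squares problem solved via the standard iterative half-thresholding scheme from the L1/2 regularization literature. The function names, parameters (`n_hidden`, `lam`, `n_iter`), and the choice of half-thresholding solver are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def half_threshold(z, lam, mu):
    """Component-wise L1/2 half-thresholding operator (standard form from the
    L1/2 regularization literature); entries below the threshold are set to zero."""
    out = np.zeros_like(z)
    thresh = (54 ** (1 / 3) / 4) * (lam * mu) ** (2 / 3)
    mask = np.abs(z) > thresh
    zi = z[mask]
    phi = np.arccos((lam * mu / 8) * (np.abs(zi) / 3) ** (-1.5))
    out[mask] = (2 / 3) * zi * (1 + np.cos(2 * np.pi / 3 - 2 * phi / 3))
    return out

def train_dpfnn_elm_l12(X, T, n_hidden=100, lam=1e-3, n_iter=500, seed=0):
    """Sketch: DPFNN trained ELM-style with an L1/2-regularized output layer.

    X: (n_samples, n_inputs) inputs, T: (n_samples, n_outputs) targets.
    Hidden weights are random and fixed (ELM); the output layer sees both the
    hidden activations and the raw inputs (the double parallel structure), and
    the L1/2 penalty drives redundant hidden-node weights to zero.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.uniform(-1, 1, n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    A = np.hstack([H, X])                            # append direct input-to-output path
    beta = np.linalg.pinv(A) @ T                     # plain ELM least-squares start
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2)           # step size for proximal gradient
    for _ in range(n_iter):                          # iterative half-thresholding
        beta = half_threshold(beta + mu * A.T @ (T - A @ beta), lam, mu)
    return W, b, beta
```

Hidden nodes whose rows in `beta` are driven to zero can be pruned afterwards, which is how the regularizer reduces the number of hidden nodes while the random ELM weights keep training fast.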
