Abstract

This paper is concerned with sparsifying the input-hidden weights of the extreme learning machine (ELM). For ordinary feedforward neural networks, sparsification is usually achieved by introducing a regularization technique into the learning process. This strategy cannot be applied to ELM, however, since the input-hidden weights of ELM are randomly chosen rather than iteratively learned. To this end, we propose a modified ELM, called ELM-LC (ELM with local connections), which sparsifies the input-hidden weights as follows: the hidden nodes and the input nodes are divided into corresponding groups, and each input node group is fully connected with its corresponding hidden node group but not connected with any other hidden node group. As in the usual ELM, the input-hidden weights are randomly given, and the hidden-output weights are obtained through least-squares learning. In numerical simulations on benchmark problems, the new ELM-LC performs better than the traditional ELM and an ELM with ordinary sparse input-hidden weights.
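The scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the group partition, the tanh activation, and all function and variable names (`elm_lc_train`, `elm_lc_predict`, `hidden_per_group`) are hypothetical choices made here. The local-connection structure amounts to a block-diagonal random input-hidden weight matrix, with the output weights solved by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_lc_train(X, T, input_groups, hidden_per_group):
    """Hypothetical ELM-LC sketch.
    X: (n, d) inputs; T: (n, m) targets;
    input_groups: index arrays partitioning the d input features,
    each group connected only to its own block of hidden nodes."""
    blocks, H_parts = [], []
    for idx in input_groups:
        # random input-hidden weights/biases, local to this group pair
        W = rng.standard_normal((len(idx), hidden_per_group))
        b = rng.standard_normal(hidden_per_group)
        blocks.append((idx, W, b))
        H_parts.append(np.tanh(X[:, idx] @ W + b))  # group's hidden outputs
    H = np.hstack(H_parts)  # (n, n_groups * hidden_per_group)
    # hidden-output weights via least squares, as in the usual ELM
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return blocks, beta

def elm_lc_predict(X, blocks, beta):
    H = np.hstack([np.tanh(X[:, idx] @ W + b) for idx, W, b in blocks])
    return H @ beta

# Toy usage: 4 input features split into 2 local groups
X = rng.standard_normal((200, 4))
T = X[:, :1] + 0.5 * X[:, 3:4]
groups = [np.array([0, 1]), np.array([2, 3])]
blocks, beta = elm_lc_train(X, T, groups, hidden_per_group=20)
mse = np.mean((elm_lc_predict(X, blocks, beta) - T) ** 2)
```

Because each input group feeds only its own hidden block, the effective input-hidden weight matrix is sparse by construction, with no regularization term needed during training.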
