Abstract

Over the past decades, the extreme learning machine (ELM) has gained considerable popularity owing to its fast training speed and ease of implementation. Although the extreme learning machine has been proved valid when an infinitely differentiable function such as the sigmoid is used as the activation, existing ELM theory pays little attention to non-differentiable activation functions. However, non-differentiable activations, the rectified linear unit (ReLU) in particular, have been shown to enable better training of deep neural networks than the previously widespread sigmoid activation, and ReLU is now the most popular choice for deep neural networks. In this note, we therefore consider extreme learning machines that adopt a non-smooth activation function, and we show that a ReLU-activated single hidden layer feedforward neural network (SLFN) can fit the given training data points with zero error, provided that sufficiently many hidden neurons are available in the hidden layer. The proof relies on an assumption that is slightly different from the original one but remains easy to satisfy. In addition, we find that the squared fitting error is monotonically non-increasing with respect to the number of hidden nodes, which in turn implies that a wider SLFN has greater expressive capacity.
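To make the setting concrete, the following is a minimal sketch (not taken from the paper) of a ReLU-activated SLFN trained in the standard ELM fashion: the input weights and hidden biases are drawn at random and kept fixed, and the output weights are obtained by least squares via the Moore-Penrose pseudoinverse. The function names, the toy data, and the choice of random Gaussian weights are illustrative assumptions; the sketch only demonstrates that the squared training error does not increase as the hidden layer widens.

import numpy as np

def elm_fit(X, T, n_hidden, seed=0):
    """ELM training of a ReLU SLFN (illustrative sketch):
    random fixed hidden parameters, output weights by least squares."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random hidden biases (fixed)
    H = np.maximum(X @ W + b, 0.0)                   # ReLU hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.maximum(X @ W + b, 0.0) @ beta

# Toy check: the squared fitting error should be non-increasing in the
# number of hidden nodes, approaching zero once the hidden layer is
# wide enough relative to the number of training samples.
X = np.random.default_rng(1).standard_normal((50, 3))
T = np.sin(X).sum(axis=1, keepdims=True)
for n_hidden in (5, 20, 50, 100):
    W, b, beta = elm_fit(X, T, n_hidden)
    err = np.sum((elm_predict(X, W, b, beta) - T) ** 2)
    print(n_hidden, err)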
