Abstract

The smooth hinging hyperplane (SHH) has been proposed as an improvement over the well-known hinging hyperplane (HH): it retains the useful features of HH while overcoming HH's drawback of nondifferentiability. This paper introduces a formal characterization of the smooth hinge function (SHF), which can be used to generate an SHH as a neural network, and gives a general method for constructing SHFs. Furthermore, it is proven that SHH is superior to HH in function approximation, i.e., the optimal error of an SHH approximating a general function is always smaller than or equal to that of an HH. In particular, when the SHF is generated by integrating a class of sigmoidal functions, it is further proven that an SHH built from 2m such SHFs outperforms a neural network with m of the sigmoidal functions from which the SHF is derived. Any upper bound on the approximation error of a neural network with m sigmoidal activation functions can therefore be translated to an SHH of m SHFs by replacing m with [m/2]. The paper also presents an identification algorithm for SHH that exploits its differentiability. Simulation experiments are reported to validate the theoretical conclusions to the extent possible.
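The abstract does not spell out the functional forms involved. As a minimal illustrative sketch only, the following assumes a Breiman-style hinge function h(x) = max(theta_plus'x, theta_minus'x) and a smooth hinge obtained by integrating the logistic sigmoid (a softplus-type smoothing); the smoothing scale eps and all function names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: the paper's exact SHF construction is not given in the
# abstract. Assumed here: a Breiman-style hinge max(theta_plus'x, theta_minus'x) and a
# smooth hinge built by integrating the logistic sigmoid (softplus-type smoothing).
# `eps` and the function names are hypothetical.
import numpy as np

def hinge(x, theta_plus, theta_minus):
    """Nondifferentiable hinge function: pointwise max of two hyperplanes (HH building block)."""
    return np.maximum(x @ theta_plus, x @ theta_minus)

def smooth_hinge(x, theta_plus, theta_minus, eps=0.1):
    """Smooth hinge: the kink max(0, t) is replaced by the integral of the logistic
    sigmoid, eps * log(1 + exp(t / eps)), which is differentiable everywhere and
    tends to max(0, t) as eps -> 0."""
    t = x @ (theta_plus - theta_minus)
    return x @ theta_minus + eps * np.logaddexp(0.0, t / eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))
    tp, tm = rng.normal(size=3), rng.normal(size=3)
    print(hinge(x, tp, tm))
    print(smooth_hinge(x, tp, tm, eps=1e-3))  # close to the hinge for small eps
```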
