Abstract

In this paper, the efficient hinging hyperplanes (EHH) neural network is proposed, which is essentially a single-hidden-layer neural network. Unlike typical single-hidden-layer networks, the hidden layer of the EHH neural network can be viewed as a directed acyclic graph (DAG), and all nodes in the DAG contribute to the output. It is proved that every EHH neural network has an equivalent adaptive hinging hyperplanes (AHH) tree; the AHH model was developed from the hinging hyperplanes (HH) model and has found successful applications in system identification. Analogous to the proof for the AHH model, the universal approximation ability of the EHH neural network is established. Unlike many other neural networks, the EHH neural network is interpretable: its ANOVA decomposition (or interaction matrix) can be obtained directly, and this interpretability serves as an indication of the importance of the input variables. Construction of the EHH neural network consists of initial network generation followed by parameter optimization, covering both the structure and the weight parameters. A descent algorithm that searches for a locally optimal EHH neural network is proposed, and its worst-case complexity is analyzed. The EHH neural network is applied to nonlinear system identification; simulation results show that satisfactory accuracy can be achieved at relatively low computational cost, while also revealing insights into the importance of the regressors and the interactions among them.
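To make the architecture described above concrete, the following is a minimal sketch (not the authors' implementation) of an EHH-style forward pass. It assumes ReLU-type source hinges of the form max(0, a^T x + b), composite DAG nodes formed as the minimum of two parent nodes, and a linear output over all node outputs, consistent with the abstract's statement that every node in the DAG contributes to the output; all dimensions, node pairings, and weights here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hinges = 3, 4

# Source nodes: one ReLU-type hinge per (a, b) pair (illustrative values).
A = rng.normal(size=(n_hinges, n_inputs))   # hinge directions
b = rng.normal(size=n_hinges)               # hinge offsets

# DAG edges: each composite node is min(parent_i, parent_j), where the
# indices refer to the running list of node outputs (assumed structure).
dag_pairs = [(0, 1), (2, 3), (4, 5)]

def ehh_forward(x, out_weights):
    """Evaluate all DAG nodes and combine them linearly."""
    nodes = list(np.maximum(0.0, A @ x + b))  # layer-1 hinge outputs
    for i, j in dag_pairs:                    # composite nodes: pairwise min
        nodes.append(min(nodes[i], nodes[j]))
    return out_weights @ np.array(nodes)      # every node feeds the output

x = rng.normal(size=n_inputs)
w_out = rng.normal(size=n_hinges + len(dag_pairs))
print(ehh_forward(x, w_out))

In this sketch the interaction structure is visible directly from the DAG: a composite node built from hinges on different input variables corresponds to an interaction term in the ANOVA-style decomposition mentioned above.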
