Abstract

This article studies a flexible neural network regression method within the functional analysis of variance (ANOVA) framework that adapts to the underlying structure of the target function. We develop a novel penalization scheme that introduces a concept of node impurity into the neural network framework. The impurity of a node represents the homogeneity of the effects of the inputs on that node. We first define the effect of an individual input on a node and, in turn, measure the node impurity based on the effects of the inputs on that node. We adopt the sum of node impurities as a penalty function; this penalty makes the connections from inputs to nodes sparse, which improves estimation accuracy by reducing unnecessary complexity and enables data-adaptive structure identification. Our method considers a large parameter space of networks, ranging from a fully connected structure to sparsely connected structures. Among the possible node-connection structures, an optimal model is selected based purely on the observed data. Numerical studies on simulated and real datasets show that the proposed method performs well in identifying the inherent structure of the regression function and achieves good estimation accuracy. Supplementary materials for this article are available online.
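The abstract does not give the exact definitions of input effects or node impurity, so the following is only a minimal illustrative sketch of the general idea: it assumes the effect of input j on hidden node k is the weight magnitude |W[j, k]| and uses an entropy-style measure of the normalized effects as a stand-in for node impurity. The function name, effect definition, and impurity formula are assumptions for illustration, not the authors' actual construction.

```python
import numpy as np

def node_impurity_penalty(W, eps=1e-12):
    """Toy node-impurity penalty for one hidden layer.

    W : (p, m) weight matrix connecting p inputs to m hidden nodes.
    Assumptions (for illustration only): the effect of input j on node k
    is |W[j, k]|, and a node's impurity is the entropy of its normalized
    input effects, so a node driven by a single input has near-zero
    impurity while a node driven evenly by all inputs has maximal impurity.
    The penalty is the sum of node impurities over the layer.
    """
    effects = np.abs(W)                               # assumed input-on-node effects
    col_sums = effects.sum(axis=0, keepdims=True) + eps
    probs = effects / col_sums                        # normalized effects per node
    impurities = -(probs * np.log(probs + eps)).sum(axis=0)
    return impurities.sum()

# A node wired to one input is "pure"; a node wired evenly to all inputs is not.
W = np.array([[1.0, 0.5],
              [0.0, 0.5],
              [0.0, 0.5]])
print(node_impurity_penalty(W))  # first node contributes ~0, second ~log(3)
```

In a training loop, a penalty of this kind would be added to the regression loss with a tuning weight, encouraging each hidden node to depend on few inputs and thereby inducing the sparse input-to-node connections described above.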
