Abstract

In this paper, we analyze Difference of Convex Neural Networks in the context of one-dimensional nonlinear regression. Specifically, we show the surprising ability of the Difference of Convex Multilayer Perceptron (DC-MLP) to avoid overfitting in nonlinear regression. In other words, DC-MLPs self-regularize: they do not require additional regularization techniques. Thus, DC-MLPs could prove very useful in practical applications involving one-dimensional nonlinear regression. It turns out that shallow MLPs with a convex activation (ReLU, softplus, etc.) fall into the class of DC-MLPs. In contrast, we refer to a shallow MLP with a squashing activation (logistic, hyperbolic tangent, etc.) as an SQ-MLP. In our numerical experiments, we show that DC-MLPs avoid overfitting in nonlinear regression, in contrast with SQ-MLPs. We also compare DC-MLPs and SQ-MLPs from a theoretical point of view.
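To make the DC claim concrete, the following is a minimal sketch (assuming a standard one-hidden-layer architecture with randomly drawn weights, not necessarily the authors' exact formulation): a shallow MLP with a convex activation such as softplus can be written as a difference of two convex functions by splitting the output weights by sign, whereas a tanh (squashing) network is included only for contrast and admits no such splitting in general.

```python
# Minimal sketch (illustrative, not the paper's implementation): a shallow 1-D MLP
# with a convex activation (softplus) decomposed as a difference of convex functions,
# versus a squashing counterpart (tanh). All names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    # Numerically stable softplus: log(1 + exp(z))
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def shallow_mlp(x, W, b, v, c, activation):
    # x: (n,) inputs; W, b: hidden weights/biases; v, c: output weights/bias
    h = activation(np.outer(x, W) + b)  # (n, hidden)
    return h @ v + c

hidden = 16
W = rng.normal(size=hidden)
b = rng.normal(size=hidden)
v = rng.normal(size=hidden)
c = 0.0
x = np.linspace(-3.0, 3.0, 200)

# DC-MLP: each softplus(w_i * x + b_i) is convex in x, so splitting the output
# weights v into positive and negative parts writes f(x) = g(x) - h(x) with g, h convex.
y_dc = shallow_mlp(x, W, b, v, c, softplus)
g = shallow_mlp(x, W, b, np.maximum(v, 0.0), c, softplus)    # convex part
h = shallow_mlp(x, W, b, np.maximum(-v, 0.0), 0.0, softplus)  # convex part
assert np.allclose(y_dc, g - h)

# SQ-MLP: squashing activation (tanh); no analogous convex decomposition in general.
y_sq = shallow_mlp(x, W, b, v, c, np.tanh)
```

The assertion checks the decomposition numerically: since the positive and negative parts of the output weights are nonnegative, each of g and h is a nonnegative combination of convex functions of x and is therefore convex.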
