Abstract

Deep neural networks (DNNs) have proven to be excellent solutions to challenging and sophisticated problems in machine learning. A key reason for their success is their strong expressive power in representing functions. For piecewise linear neural networks (PLNNs), the number of linear regions is a natural measure of their expressive power, since it characterizes the number of linear pieces available to model complex patterns. In this article, we theoretically analyze the expressive power of PLNNs by counting and bounding the number of linear regions. We first refine the existing upper and lower bounds on the number of linear regions of PLNNs with rectified linear units (ReLU PLNNs). Next, we extend the analysis to PLNNs with general piecewise linear (PWL) activation functions and derive the exact maximum number of linear regions of single-layer PLNNs. Moreover, we obtain upper and lower bounds on the number of linear regions of multilayer PLNNs, both of which scale polynomially with the number of neurons at each layer and the number of pieces of the PWL activation function, but exponentially with the number of layers. This key property enables deep PLNNs with complex activation functions to outperform their shallow counterparts when computing highly complex and structured functions, which, to some extent, explains the performance improvement of deep PLNNs in classification and function fitting.
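To make the region-counting idea concrete, the sketch below (not taken from the paper) estimates the number of linear regions of a single-layer ReLU network by sampling inputs and counting distinct activation patterns, and compares the result with the classical hyperplane-arrangement bound sum_{j=0}^{d} C(n, j) for n neurons acting on d-dimensional inputs. The network sizes, sampling box, and helper names are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: empirical vs. theoretical region counts for a one-layer ReLU net.
# Assumptions: random weights in general position; sampling gives a lower estimate,
# since tiny regions or regions outside the sampling box may be missed.
import numpy as np
from math import comb

def max_regions_single_layer(n_neurons: int, input_dim: int) -> int:
    """Hyperplane-arrangement upper bound: n_neurons hyperplanes in R^input_dim."""
    return sum(comb(n_neurons, j) for j in range(input_dim + 1))

def empirical_region_count(W, b, num_samples=200_000, box=10.0, seed=0):
    """Count distinct ReLU activation patterns over sampled inputs.

    Each distinct on/off pattern of the neurons corresponds to one linear
    region of the network's input space, so this is a lower estimate."""
    rng = np.random.default_rng(seed)
    d = W.shape[1]
    x = rng.uniform(-box, box, size=(num_samples, d))
    patterns = (x @ W.T + b) > 0  # boolean activation pattern for each sample
    return len({tuple(p) for p in patterns})

if __name__ == "__main__":
    input_dim, n_neurons = 2, 5  # small example so sampling nearly saturates
    rng = np.random.default_rng(1)
    W = rng.standard_normal((n_neurons, input_dim))
    b = rng.standard_normal(n_neurons)
    print("empirical regions :", empirical_region_count(W, b))
    print("theoretical max   :", max_regions_single_layer(n_neurons, input_dim))
```

For 5 neurons on 2-dimensional inputs, the bound gives 1 + 5 + 10 = 16 regions; with weights in general position the empirical count approaches this value as the sampling density grows.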
