Abstract
A recent research interest in deep neural networks is to understand why deep networks are preferred to shallow networks. In this article, we consider an advantage of a deep structure in realizing a Heaviside function in training. This is significant not only for simple classification problems but also as a building block for constructing general non-smooth functions. A Heaviside function can be well approximated by a difference of ReLUs if extremely large weight values can be set; however, such values are not easy to attain in training. We show that a Heaviside function can be well represented without large weight values if a deep structure is employed. We also show that the update terms of the weights on the input side necessarily become large when a network is trained to realize a Heaviside function; therefore, apparent acceleration of training is brought about even when a small learning rate is set. As a result, by employing a deep structure, a good fit to a Heaviside function can be obtained within a reasonable training time under a moderately small learning rate. Our results suggest that a deep structure is effective in practical training that requires a discontinuous output.
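To make the contrast concrete, here is a minimal NumPy sketch (our own illustration, not code from the article): a single difference of ReLUs, relu(w*x) - relu(w*x - 1), approximates the Heaviside function with a transition band of width 1/w, so a sharp step demands a huge w, whereas composing the same moderate-weight block several times shrinks the band to 1/w**depth. The function names shallow_step and deep_step are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shallow_step(x, w):
    """Shallow approximation: H(x) ~ ReLU(w*x) - ReLU(w*x - 1).
    This equals clamp(w*x, 0, 1), so the transition band has
    width 1/w; a sharp step requires an extremely large w."""
    return relu(w * x) - relu(w * x - 1.0)

def deep_step(x, w, depth):
    """Deep approximation: compose the same moderate-weight ramp
    `depth` times. Each layer shrinks the transition band by a
    factor of w, giving an effective width of 1/w**depth without
    any individual weight being large."""
    y = x
    for _ in range(depth):
        y = relu(w * y) - relu(w * y - 1.0)
    return y

x = np.linspace(-0.5, 0.5, 11)
print(shallow_step(x, w=4.0))        # band width 1/4: visibly gradual ramp
print(deep_step(x, w=4.0, depth=5))  # band width 4**-5: nearly a true step
```

With w = 4 and depth = 5 the transition band is about 0.001 wide, illustrating how depth can substitute for weight magnitude when fitting a discontinuous target.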