Abstract

Although deep convolutional networks have been successfully applied to many applications, their working mechanisms remain poorly understood. In this paper, we investigate how convolutional neural networks that use the ReLU activation function form their decision boundaries. Due to the non-linearity of ReLU, such a network can be viewed as a non-linear transformation of its input. Nevertheless, the decision boundary in the original input space is always locally linear, which may produce undesirable decision boundaries. We also present examples that illustrate interesting properties of the decision boundaries of convolutional neural networks with ReLU.
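To make the local-linearity claim concrete, the following is a minimal NumPy sketch, not taken from the paper: it uses a small fully connected ReLU network on R^2 (rather than a convolutional one, purely for brevity), with randomly chosen illustrative weights W1, b1, W2, b2. Within a fixed ReLU activation pattern the network reduces to an affine map, so the boundary where the two class logits are equal is a hyperplane in that region.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's model): a two-layer
# ReLU network mapping R^2 to two class logits.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)   # two class logits

def forward(x):
    h = W1 @ x + b1
    pattern = h > 0                       # ReLU activation pattern at x
    return W2 @ (h * pattern) + b2, pattern

def local_affine_map(pattern):
    # For a fixed activation pattern, ReLU(h) = D h with D diagonal 0/1,
    # so the whole network collapses to A x + c on that region.
    D = np.diag(pattern.astype(float))
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = rng.normal(size=2)
logits, pattern = forward(x)
A, c = local_affine_map(pattern)
assert np.allclose(logits, A @ x + c)     # affine formula matches the network

# Near x, the decision boundary is the set where the two logits coincide:
# (A[0] - A[1]) . x + (c[0] - c[1]) = 0, i.e. a line (locally linear).
w, b = A[0] - A[1], c[0] - c[1]
print("local boundary: %.3f*x1 + %.3f*x2 + %.3f = 0" % (w[0], w[1], b))
```

The same reasoning applies region by region: each activation pattern carves out a polytope in input space, and the overall decision boundary is stitched together from such linear pieces.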
