Abstract

Convolutional neural networks (CNNs) handle the case where filters extend beyond the image boundary using heuristics such as zero, repeat, or mean padding. These schemes are applied in an ad-hoc fashion and, being weakly related to the image content and oblivious to the target task, result in low output quality at the boundary. In this paper, we propose a simple and effective improvement that learns the boundary handling itself. At training time, the network is provided with a separate set of explicit boundary filters. At test time, we use these filters, which have learned to extrapolate features at the boundary in a way that is optimal for the specific task. Our extensive evaluation, over a wide range of architectural changes (variations of layers, feature channels, or both), shows that the explicit filters result in improved boundary handling. Furthermore, we investigate the efficacy of variations of such boundary filters with respect to convergence speed and accuracy. Finally, we demonstrate an improvement of 5–20% across a range of typical CNN applications (colorization, de-Bayering, optical flow, disparity estimation, and super-resolution). Supplementary material and code can be downloaded from the project page: http://geometry.cs.ucl.ac.uk/projects/2019/investigating-edge/.
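The core idea described above is to replace fixed padding heuristics with boundary filters that are trained alongside the regular ones. The sketch below is a minimal illustration of that idea, not the authors' implementation: a 3×3 convolution that keeps nine independent weight sets, one for the interior and one for each edge and corner region, and selects the matching response at every output position. The class name, initialization, and region layout are assumptions made for illustration.

```python
# Minimal sketch (illustrative, not the authors' code) of explicit boundary filters:
# separate learnable kernels for the interior and for each boundary region, so the
# network can learn task-specific extrapolation instead of relying on a fixed padding rule.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryAwareConv2d(nn.Module):
    """3x3 convolution with 9 weight sets: interior, 4 edges, 4 corners."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 9 independent kernels; index 0 = interior, 1..8 = boundary regions.
        self.weight = nn.Parameter(torch.randn(9, out_ch, in_ch, 3, 3) * 0.05)
        self.bias = nn.Parameter(torch.zeros(9, out_ch))

    def forward(self, x):
        n, _, h, w = x.shape
        # Candidate responses from every region filter (zero padding keeps all outputs
        # the same size; the boundary filters can learn to compensate for the padded zeros).
        outs = [F.conv2d(x, self.weight[k], self.bias[k], padding=1) for k in range(9)]
        # Region index per output pixel: 0 interior, 1 top, 2 bottom, 3 left, 4 right,
        # 5..8 corners (corner assignments overwrite edge assignments).
        region = torch.zeros(h, w, dtype=torch.long, device=x.device)
        region[0, :], region[-1, :] = 1, 2
        region[:, 0], region[:, -1] = 3, 4
        region[0, 0], region[0, -1], region[-1, 0], region[-1, -1] = 5, 6, 7, 8
        stacked = torch.stack(outs, dim=0)                      # (9, N, C, H, W)
        idx = region.view(1, 1, 1, h, w).expand(1, n, stacked.shape[2], h, w)
        return stacked.gather(0, idx).squeeze(0)                # per-region response

# Usage: y = BoundaryAwareConv2d(3, 16)(torch.rand(2, 3, 32, 32))  # -> (2, 16, 32, 32)
```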

Highlights

  • When performing convolutions on a finite domain, boundary rules are required as the kernel’s support extends beyond the edge

  • Addressing the boundary challenge, and making use of a convolutional neural network’s (CNN’s) extrapolating power, we propose a novel explicit boundary rule for CNNs

  • We further investigate the benefit of boundary filters by introducing different implementation strategies for the additional filters


Summary

Introduction

When performing convolutions on a finite domain, boundary rules are required as the kernel’s support extends beyond the edge. In convolutional neural networks (CNNs), many discrete filter kernels “slide” over a 2D image, and boundary rules such as zero, reflect, mean, or clamp padding are typically used to extrapolate values outside the image. Considering a simple detection filter (Fig. 1a) applied to a diagonal feature (Fig. 1b), we see that no boundary rule is ever ideal: zero padding will create a black boundary halo (Fig. 1c), using the mean color will reduce but not remove the issue (Fig. 1d), and reflect and clamp (Fig. 1e, f) will create different kinks in a diagonal edge where the ground-truth continuation would be straight. These artifacts will manifest as false positive and negative detections.
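To make the effect concrete, the short script below (an illustrative example, not taken from the paper) applies a hand-picked diagonal detection kernel to a synthetic diagonal step edge under zero, mean, reflect, and clamp (replicate) padding. The interior responses agree across all rules; only the first output row and column, whose receptive fields cross the image border, differ between them.

```python
# Illustration (not from the paper) of how common padding rules change a filter's
# response at the image boundary.

import torch
import torch.nn.functional as F

# Synthetic 8x8 image: a diagonal step edge (1 above the main diagonal, 0 below).
h = w = 8
img = (torch.arange(h).view(-1, 1) < torch.arange(w).view(1, -1)).float().view(1, 1, h, w)

# A simple 3x3 diagonal detection kernel (illustrative choice).
k = torch.tensor([[[[ 2., -1., -1.],
                    [-1.,  2., -1.],
                    [-1., -1.,  2.]]]])

def conv_with_padding(x, mode):
    if mode == "zero":
        p = F.pad(x, (1, 1, 1, 1), mode="constant", value=0.0)
    elif mode == "mean":
        p = F.pad(x, (1, 1, 1, 1), mode="constant", value=x.mean().item())
    else:  # "reflect" or "replicate" (clamp)
        p = F.pad(x, (1, 1, 1, 1), mode=mode)
    return F.conv2d(p, k)  # valid convolution on the padded image

for mode in ["zero", "mean", "reflect", "replicate"]:
    r = conv_with_padding(img, mode)
    # Interior responses are identical across modes; the boundary row is not.
    print(mode, "first-row response:", r[0, 0, 0, :4].tolist())
```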

