Abstract

There has been significant progress in contour detection with the development of convolutional neural networks. To improve contour detection performance for objects in complex scenes, we propose a novel lateral refinement network (LRNet) that extracts feature information from multiple refinement modules. In addition, we modify LRNet to train an effective contour detector, called lateral refinement contour (LRC). LRNet extends the generic refinement architecture with multiple refinement levels, making the refinement network deeper so that it extracts richer convolutional features. The low-level refinement modules explicitly exploit the information available in the side outputs of the down-sampling process, and the high-level refinement modules fuse the low-level output features. The proposed method improves image-to-image prediction by deeply stacking all the meaningful refinement levels in a holistic manner. Using the pre-trained VGG16 and ResNet networks as backbones to train LRNet, we achieve state-of-the-art performance on several public datasets. The experimental results on the BSDS500 dataset are ODS = 0.816 and ODS = 0.820, and the results on the NYUD-v2 dataset are ODS = 0.761 and ODS = 0.760. Notably, the results on BSDS500 surpass human-level performance under stricter criteria.
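To make the described pipeline concrete, the following is a minimal sketch of one possible reading of the lateral refinement idea: low-level refinement modules attached to the side outputs of a VGG16 down-sampling path, with a high-level stage that fuses the refined features into a single contour map. The stage splits, channel widths, `RefinementModule` design, and concatenation-plus-1x1-convolution fusion are illustrative assumptions, not the authors' actual LRNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class RefinementModule(nn.Module):
    """Hypothetical low-level refinement block: 1x1 channel reduction followed by a 3x3 conv."""
    def __init__(self, in_ch, out_ch=32):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.refine = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.refine(F.relu(self.reduce(x))))


class LateralRefinementSketch(nn.Module):
    """Sketch of a lateral refinement network over a pre-trained VGG16 backbone.

    Each backbone stage produces a side output; a low-level refinement module
    processes it, and a high-level stage fuses all refined maps into one
    full-resolution contour prediction. This is an assumed structure based on
    the abstract, not the published LRNet architecture.
    """
    def __init__(self):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        # Split the VGG16 feature extractor into its five convolutional stages.
        self.stages = nn.ModuleList([feats[:4], feats[4:9], feats[9:16],
                                     feats[16:23], feats[23:30]])
        side_channels = [64, 128, 256, 512, 512]
        self.low_level = nn.ModuleList([RefinementModule(c) for c in side_channels])
        # High-level fusion: concatenate refined side outputs, predict one contour map.
        self.high_level = nn.Conv2d(32 * len(side_channels), 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        refined = []
        for stage, low in zip(self.stages, self.low_level):
            x = stage(x)                  # side output of the down-sampling path
            r = low(x)                    # low-level refinement of this side output
            refined.append(F.interpolate(r, size=(h, w), mode="bilinear",
                                         align_corners=False))
        return torch.sigmoid(self.high_level(torch.cat(refined, dim=1)))


if __name__ == "__main__":
    model = LateralRefinementSketch().eval()
    with torch.no_grad():
        contour_map = model(torch.randn(1, 3, 320, 320))
    print(contour_map.shape)  # torch.Size([1, 1, 320, 320])
```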
