Abstract

Feature fusion has been widely used in object detection and semantic segmentation to improve accuracy. Global feature fusion integrates high-level semantic information with detailed spatial information: combining the fine feature maps from the bottom-up stage with the coarse feature maps from the top-down stage is effective in networks that must understand the contextual information of an image. In this paper, we propose a method that integrates multiple feature maps within local regions in addition to global feature fusion. Local multi-scale feature fusion combines neighboring feature maps from different levels and scales, yielding a more diverse range of receptive fields with less computation while preserving detailed appearance information. Experimental results demonstrate that the proposed network, built on global and local feature fusion, achieves accuracy competitive with previous state-of-the-art methods at real-time inference speed on semantic segmentation and object detection tasks.
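
The sketch below illustrates the two fusion ideas described in the abstract in PyTorch-style Python. It is a minimal illustration only: the module name, channel widths, dilation rates, and the choice of addition for global fusion and dilated branches for local multi-scale fusion are assumptions for exposition, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalFusion(nn.Module):
    """Illustrative sketch: fuse a coarse top-down map with a fine
    bottom-up map (global fusion), then merge neighboring-scale
    responses from cheap dilated branches (local multi-scale fusion)."""

    def __init__(self, fine_ch, coarse_ch, out_ch):
        super().__init__()
        # Project both inputs to a common channel width.
        self.fine_proj = nn.Conv2d(fine_ch, out_ch, kernel_size=1)
        self.coarse_proj = nn.Conv2d(coarse_ch, out_ch, kernel_size=1)
        # Local multi-scale branches: neighboring receptive fields via
        # dilated 3x3 convolutions (a hypothetical, inexpensive choice).
        self.local_branches = nn.ModuleList([
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.merge = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, fine, coarse):
        # Global fusion: upsample the coarse (semantic) map to the fine
        # (detailed) resolution and combine the two by addition.
        coarse_up = F.interpolate(self.coarse_proj(coarse),
                                  size=fine.shape[-2:], mode="bilinear",
                                  align_corners=False)
        fused = self.fine_proj(fine) + coarse_up
        # Local multi-scale fusion: concatenate the neighboring-scale
        # responses and mix them with a 1x1 convolution.
        local = torch.cat([branch(fused) for branch in self.local_branches], dim=1)
        return self.merge(local)


if __name__ == "__main__":
    fine = torch.randn(1, 64, 64, 64)     # high-resolution bottom-up map
    coarse = torch.randn(1, 256, 16, 16)  # low-resolution top-down map
    out = GlobalLocalFusion(64, 256, 128)(fine, coarse)
    print(out.shape)  # torch.Size([1, 128, 64, 64])
```

Under these assumptions, the global step recovers spatial detail lost in the coarse map, while the local step widens the receptive-field diversity at a single resolution without further downsampling.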
