Abstract
With the continuous improvement of deep learning, salient object detection based on deep learning has become a hot topic in computer vision, and the Fully Convolutional Network (FCN) has become the mainstream approach to the task. In this article, we propose a new end-to-end multi-level feature fusion module (MCFB) that successfully extracts rich multi-scale global information by integrating semantic and detailed information. In our module, we obtain feature maps at different levels through convolution and then cascade these feature maps so that global information is fully taken into account, yielding a coarse saliency map. We also propose a refinement module on top of this base module to further optimize the feature maps. To obtain clearer boundaries, we guide the learning process with a custom loss function that combines the Intersection-over-Union (IoU), Binary Cross-Entropy (BCE), and Structural Similarity (SSIM) losses. The resulting model extracts global information more thoroughly while producing clearer boundaries. Compared with several existing representative methods, our method achieves good results.
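As a rough illustration of the hybrid loss mentioned above, the following is a minimal sketch assuming PyTorch, single-channel saliency predictions in [0, 1], and equal weighting of the three terms; the window size, SSIM formulation, and weights are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def iou_loss(pred, target, eps=1e-7):
    # Soft IoU loss: 1 - intersection / union, averaged over the batch.
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()


def ssim_loss(pred, target, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-scale SSIM with a uniform local window; the loss is 1 - mean SSIM.
    pad = window_size // 2
    mu_p = F.avg_pool2d(pred, window_size, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window_size, stride=1, padding=pad)
    var_p = F.avg_pool2d(pred * pred, window_size, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window_size, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window_size, stride=1, padding=pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)
    )
    return (1.0 - ssim).mean()


def hybrid_loss(pred, target):
    # Sum of BCE, SSIM, and IoU terms; equal weighting is an assumption.
    return F.binary_cross_entropy(pred, target) + ssim_loss(pred, target) + iou_loss(pred, target)
```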