Abstract
The pursuit of ever-higher accuracy has led most existing salient object detection (SOD) models to adopt large and complex structures, while carefully designed lightweight SOD models still struggle to detect salient objects accurately. To improve the practicality of SOD, in this paper we design a novel position prior attention network (PPANet) for fast and accurate salient object detection. Specifically, we propose a position prior attention module (PPAM), which first assigns different weights to spatial positions based on the prior that objects near the image center are more likely to attract human attention, and then perceives object context information through different receptive fields. In addition, we propose a context fusion module (CFM) to prevent the coarse resolution of high-level features from diluting salient object boundaries during fusion. We present two versions of PPANet: a heavyweight PPANet-R aimed at high-accuracy SOD and a lightweight PPANet-M that strikes a good balance between accuracy and efficiency. Furthermore, we construct a structural polishing loss that pays more attention to object boundaries and alleviates the problem of sample imbalance. Experimental results on five popular benchmark datasets demonstrate that the proposed PPANet-R outperforms existing SOD models, and that PPANet-M achieves accuracy comparable to state-of-the-art heavyweight SOD methods at a real-time detection speed of 150 FPS.
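To make the two ideas the abstract attributes to PPAM concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes the center prior is a Gaussian weight map peaking at the image center and that the different receptive fields are realized as parallel dilated 3x3 convolutions. The module and parameter names (PositionPriorAttention, sigma, dilations) are illustrative only.

```python
import torch
import torch.nn as nn


class PositionPriorAttention(nn.Module):
    """Sketch of a center-prior weighting followed by multi-receptive-field context aggregation."""

    def __init__(self, channels, dilations=(1, 2, 4), sigma=0.5):
        super().__init__()
        self.sigma = sigma
        # One 3x3 branch per receptive field, realized via dilation (assumed design).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def center_prior(self, h, w, device):
        # Gaussian weight map in [0, 1], peaking at the image center.
        ys = torch.linspace(-1, 1, h, device=device)
        xs = torch.linspace(-1, 1, w, device=device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        return torch.exp(-(xx ** 2 + yy ** 2) / (2 * self.sigma ** 2))

    def forward(self, x):
        _, _, h, w = x.shape
        # Re-weight positions with the center prior before context aggregation.
        x = x * self.center_prior(h, w, x.device).view(1, 1, h, w)
        ctx = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(ctx)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    print(PositionPriorAttention(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```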