Abstract
RGB‐D salient object detection (SOD) aims to detect salient objects from an RGB image and its paired depth data. Although related networks have achieved appreciable performance, they are ill‐suited to mobile devices because they are large and slow. Existing lightweight networks for RGB‐D SOD take depth information as an additional input and integrate it with the colour image, achieving impressive performance. However, depth maps vary in quality and are costly to acquire. To address this issue, a depth‐aware strategy is combined for the first time to propose a lightweight SOD model, the Depth‐Aware Lightweight network (DAL), which uses only RGB images as input and is suitable for mobile devices. The DAL framework comprises a multi‐level feature‐extraction branch, a specially designed channel fusion (CF) module that perceives depth information, and a multi‐modal fusion (MMF) module that fuses the multi‐modal feature maps. The proposed DAL is evaluated on five datasets and compared with 14 models. Experimental results demonstrate that DAL outperforms state‐of‐the‐art lightweight networks. DAL has only 5.6 M parameters and an inference time of 39 ms. Compared with the best‐performing lightweight method, DAL has fewer parameters, faster inference, and higher accuracy.
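The abstract describes a pipeline in which depth is inferred from RGB features rather than supplied as input: a CF module produces a depth-aware feature map from backbone features, and an MMF module fuses it with the RGB features. The following is a minimal NumPy sketch of that data flow only; the shapes, the 1×1-convolution formulation, and the function names (`channel_fusion`, `multi_modal_fusion`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution expressed as a channel-wise matmul:
    # (C_in, H, W) -> (C_out, H, W).
    c_in, h, wd = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, wd)

def channel_fusion(rgb_feat, w_d):
    # Hypothetical CF module: project RGB features into a depth-aware
    # feature map, so no measured depth input is required.
    return np.maximum(conv1x1(rgb_feat, w_d), 0.0)  # ReLU

def multi_modal_fusion(rgb_feat, depth_feat, w_f):
    # Hypothetical MMF module: concatenate the two modalities along the
    # channel axis and mix them with a 1x1 projection.
    fused = np.concatenate([rgb_feat, depth_feat], axis=0)
    return conv1x1(fused, w_f)

# Toy features from one level of a lightweight backbone (assumed shapes).
c, h, w = 16, 8, 8
rgb_feat = rng.standard_normal((c, h, w))
w_d = rng.standard_normal((c, c)) * 0.1        # CF projection weights
w_f = rng.standard_normal((c, 2 * c)) * 0.1    # MMF mixing weights

depth_feat = channel_fusion(rgb_feat, w_d)
saliency_feat = multi_modal_fusion(rgb_feat, depth_feat, w_f)
print(saliency_feat.shape)  # (16, 8, 8)
```

In the full model this fusion would be repeated at each backbone level, with the fused features decoded into a saliency map; the sketch shows only a single level.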