Abstract

Semantic segmentation plays a vital role in indoor scene analysis, but its accuracy is still limited by the complex conditions of indoor scenes, and the task is difficult to complete using RGB images alone. Since depth images provide additional 3D geometric information, researchers have incorporated them to improve the accuracy of indoor semantic segmentation. However, effectively fusing depth information with RGB images remains a challenge. To address this issue, a three-stream coordinate attention network is proposed. The presented network constructs a multi-modal feature fusion module for RGB-D features that aggregates the two modalities' information along the spatial and channel dimensions. Meanwhile, three convolutional neural network branches form a parallel three-stream structure that processes the RGB features, depth features, and fused features, respectively. On one hand, the proposed network preserves the original RGB and depth feature streams; on the other hand, it better utilizes and propagates the fused feature stream. An embedded ASPP module aggregates feature information at different scales to refine the semantic information and obtain more accurate features. Experimental results show that the proposed model reaches a state-of-the-art mIoU of 50.2% on the NYUDv2 dataset and also achieves state-of-the-art accuracy on the more complex SUN RGB-D dataset.
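As a rough illustration of the fusion idea described in the abstract, below is a minimal PyTorch sketch of a coordinate-attention-style RGB-D fusion block: it concatenates same-shaped RGB and depth feature maps, then re-weights the result with direction-aware (H and W) attention that also models channel interactions. The class name, layer sizes, pooling choice, and the way the attention weights are applied are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CoordAttentionFusion(nn.Module):
    """Hypothetical RGB-D fusion block (not the paper's exact module):
    concatenate the two modalities, then re-weight with coordinate
    attention, which keeps positional information along H and W while
    modeling cross-channel interactions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Project the concatenated RGB + depth features back to `channels`.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)
        mid = max(channels // reduction, 8)
        # Shared transform applied to the pooled H- and W-direction features.
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        # Separate 1x1 convs produce per-direction attention maps.
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = self.merge(torch.cat([rgb, depth], dim=1))  # (N, C, H, W)
        n, c, h, w = x.shape
        # Pool along each spatial direction to keep positional information.
        x_h = x.mean(dim=3, keepdim=True)                       # (N, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (N, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # Direction-aware attention weights in [0, 1].
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * a_h * a_w


# Usage: fuse same-shaped RGB and depth feature maps from two backbone
# streams; the fused output would feed the third (combined) stream.
fuse = CoordAttentionFusion(channels=64)
rgb_feat = torch.randn(2, 64, 60, 80)
depth_feat = torch.randn(2, 64, 60, 80)
fused = fuse(rgb_feat, depth_feat)  # (2, 64, 60, 80)
```

In this sketch the fusion happens once at a single scale; in a three-stream design such a block would typically be applied at several backbone stages, with the fused stream then passed through the ASPP head for multi-scale aggregation.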
