Abstract

Extracting meaningful information from objects of varying scale and shape is challenging, as distinctive features must be obtained for small to large objects to achieve accurate segmentation of a 3D point cloud. To handle this challenge, we propose an attention-based multi-scale atrous convolutional neural network (AMSASeg) for object segmentation from 3D point clouds. Specifically, the backbone network consists of three modules: distinctive atrous spatial pyramid pooling (DASPP), FireModule, and FireDeconv. The DASPP utilizes average pooling operations and atrous convolutions with different rates to aggregate distinctive information on objects at multiple scales. The FireModule and FireDeconv efficiently extract general features. Meanwhile, a spatial attention module (SAM) and a channel attention module (CAM) aggregate spatial and semantic information on smaller objects from low-level and high-level layers, respectively. Our network encodes multi-scale information and extracts distinct features for all objects, enhancing segmentation performance. We evaluate our method on the KITTI dataset. Experimental results demonstrate that the proposed network effectively improves segmentation performance on small to large objects at real-time speed.
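The core idea behind the DASPP module, aggregating atrous (dilated) convolutions with different rates to capture multi-scale context, can be sketched in simplified 1D form. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function names, the single shared kernel, and the crop-and-sum fusion are all hypothetical simplifications of the actual multi-branch pooling design.

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    # Dilated (atrous) convolution: kernel taps are spaced `rate`
    # samples apart, enlarging the receptive field without adding weights.
    k = len(kernel)
    span = (k - 1) * rate
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

def daspp_like(x, kernel, rates):
    # Run one branch per dilation rate, crop all branches to a common
    # length, and fuse by summation to aggregate multi-scale context.
    branches = [atrous_conv1d(x, kernel, r) for r in rates]
    n = min(len(b) for b in branches)
    return sum(b[:n] for b in branches)
```

For example, `daspp_like(np.arange(8.0), [1.0, 1.0, 1.0], rates=[1, 2])` combines a local branch (rate 1) with a wider-context branch (rate 2); in the real network these branches are 2D convolutions with learned weights, fused together with pooled features.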

