Abstract

In recent years, a wide range of computer vision applications have relied upon superpixels. We present a new, efficient framework for generating superpixel segmentations of RGB-D images that combines color and spatial features and exploits the depth information as fully as possible. This is achieved by defining a distance measure over the point cloud computed from the depth map and over the vertex normals. We use the distance between voxels to distinguish objects in the depth map and use the normal map to separate planes within an object. In this way, our method generates superpixels that both adhere compactly to edges and fit planar regions. We then compare the proposed method with six state-of-the-art superpixel algorithms with respect to their ability to adhere to image boundaries. The comparisons demonstrate that our method, built on the simple linear iterative clustering (SLIC) algorithm, outperforms the most recent superpixel methods.
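To make the combined measure concrete, the sketch below shows one plausible SLIC-style distance that mixes color, image-plane, 3-D point (depth), and surface-normal terms. The function name, the feature representation, the weights, and the way the terms are combined are illustrative assumptions for this sketch, not the exact formulation used in the paper.

```python
import numpy as np

def combined_distance(p, q, w_color=1.0, w_spatial=0.5, w_depth=1.0, w_normal=1.0):
    """Illustrative SLIC-style distance for RGB-D superpixel clustering.

    p and q are dicts with keys 'lab' (CIELAB color), 'xy' (pixel position),
    'xyz' (3-D point back-projected from the depth map), and 'normal'
    (unit vertex normal). Weights and the combination rule are assumptions,
    not the paper's published formulation.
    """
    d_color   = np.linalg.norm(p['lab'] - q['lab'])            # color similarity
    d_spatial = np.linalg.norm(p['xy']  - q['xy'])             # image-plane proximity
    d_depth   = np.linalg.norm(p['xyz'] - q['xyz'])            # 3-D distance separates objects
    d_normal  = 1.0 - float(np.dot(p['normal'], q['normal']))  # normal deviation separates planes
    return (w_color * d_color**2 + w_spatial * d_spatial**2
            + w_depth * d_depth**2 + w_normal * d_normal**2) ** 0.5
```

In a SLIC-like loop, this distance would replace the standard color-plus-spatial measure when assigning pixels to the nearest cluster center, so that clusters stop at depth discontinuities (object boundaries) and at changes in surface orientation (plane boundaries).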

