Abstract

6D pose estimation is widely applied in robotic applications such as service robots, collaborative robots, and unmanned warehouses. However, accurate 6D pose estimation remains a challenging problem due to the complexity of real scenes, including illumination changes, occlusion, and even truncation between objects, and prior work typically requires an additional refinement stage to obtain accurate poses. Targeting both efficiency and accuracy in these complex scenes, this paper presents a novel end-to-end network that effectively utilises the contextual information within a neighbourhood region of each pixel to estimate the 6D object pose from RGB-D images. Specifically, the network first applies an attention mechanism to extract effective pixel-wise dense multimodal features, which are then expanded into multi-scale dense features by integrating pixel-wise features at different scales for pose estimation. The proposed method is evaluated extensively on the LineMOD and YCB-Video datasets, and the experimental results show that it outperforms several state-of-the-art baselines in terms of average point distance and average closest point distance.
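To make the two core ideas in the abstract concrete, the following is a minimal sketch of attention-based fusion of pixel-wise RGB and geometry features followed by multi-scale aggregation. All names, layer sizes, and the gating formulation (`FEAT_DIM`, `AttentionFusion`, `MultiScaleAggregator`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: layer widths, scales, and the gating scheme are
# assumptions for exposition, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 128  # assumed per-pixel feature width for both modalities


class AttentionFusion(nn.Module):
    """Fuse per-pixel RGB and geometry features via a learned attention gate."""

    def __init__(self, dim=FEAT_DIM):
        super().__init__()
        # Predict a per-pixel weight from the concatenated modalities.
        self.gate = nn.Sequential(
            nn.Conv1d(2 * dim, dim, 1), nn.ReLU(),
            nn.Conv1d(dim, 1, 1), nn.Sigmoid(),
        )

    def forward(self, rgb_feat, geo_feat):
        # rgb_feat, geo_feat: (B, C, N) features for N sampled pixels
        a = self.gate(torch.cat([rgb_feat, geo_feat], dim=1))  # (B, 1, N)
        return a * rgb_feat + (1.0 - a) * geo_feat             # (B, C, N)


class MultiScaleAggregator(nn.Module):
    """Augment each pixel's feature with pooled context at several scales."""

    def __init__(self, dim=FEAT_DIM, scales=(8, 32)):
        super().__init__()
        self.scales = scales
        self.proj = nn.Conv1d(dim * (1 + len(scales)), dim, 1)

    def forward(self, feat):
        # feat: (B, C, N); pool over neighbourhoods of increasing size,
        # then broadcast the context back to every pixel.
        outs = [feat]
        for s in self.scales:
            pooled = F.avg_pool1d(feat, kernel_size=s, stride=s)
            outs.append(F.interpolate(pooled, size=feat.shape[-1], mode="nearest"))
        return self.proj(torch.cat(outs, dim=1))  # (B, C, N)


# Usage: the fused multi-scale features would feed a pose-regression head
# (translation + rotation per pixel), which is omitted here.
rgb = torch.randn(2, FEAT_DIM, 1024)
geo = torch.randn(2, FEAT_DIM, 1024)
feat = MultiScaleAggregator()(AttentionFusion()(rgb, geo))
print(feat.shape)  # torch.Size([2, 128, 1024])
```

The gate lets the network weigh appearance against geometry per pixel (useful under illumination changes), while the pooled-and-broadcast branches supply the neighbourhood context the abstract refers to; both choices here are hedged stand-ins for whatever the paper actually uses.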
