Abstract

For full process automation in Industry 4.0 with smart machines, localizing objects is a crucial capability. Enabling such object localization with machine learning is tempting, as it avoids manually engineering features for every object class, yet gathering large amounts of accurate pose data to train neural networks can be just as difficult. In this work, a novel self-supervised approach based on point clouds is presented, which resolves these issues and allows robust detection and localization of objects in cluttered scenes while compensating for noise, occlusions and symmetries. It leverages the fact that simulating 3D sensor data is simpler than photorealistic light rendering: random scenes are constructed by a stochastic process that maintains object relations through placement operations such as cloning, stacking and storing, and a fully convolutional voting network is then trained on random scans of those scenes.
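The stochastic scene construction with placement operations could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the object classes, pose representation, operation set and their parameters are all assumptions made for the example.

```python
import random

def make_scene(object_classes, n_ops=10, seed=0):
    """Build a synthetic scene by repeatedly applying random placement
    operations, keeping object relations plausible (hypothetical sketch)."""
    rng = random.Random(seed)
    scene = []  # list of (class_name, (x, y, z)) placements

    def spawn():
        # Drop a fresh object at a random position on the ground plane.
        cls = rng.choice(object_classes)
        scene.append((cls, (rng.uniform(0, 1), rng.uniform(0, 1), 0.0)))

    def clone():
        # Duplicate an existing object next to the original, emulating
        # identical parts lying side by side.
        cls, (x, y, z) = rng.choice(scene)
        scene.append((cls, (x + rng.uniform(0.05, 0.15), y, z)))

    def stack():
        # Place a new instance of an existing object on top of it.
        cls, (x, y, z) = rng.choice(scene)
        scene.append((cls, (x, y, z + 0.1)))

    def store():
        # Put an object into a fixed bin region, emulating stored parts.
        cls = rng.choice(object_classes)
        scene.append((cls, (rng.uniform(0.8, 1.0), rng.uniform(0.8, 1.0), 0.0)))

    spawn()  # ensure the scene is non-empty before relational operations
    for _ in range(n_ops):
        rng.choice([spawn, clone, stack, store])()
    return scene

scene = make_scene(["bolt", "bracket", "gear"], n_ops=8, seed=42)
```

Each call to `make_scene` yields one random scene; simulated scans of many such scenes would then serve as self-supervised training data for the voting network.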
