Abstract

We present an end-to-end method for active object classification in cluttered scenes from RGB-D data. Our algorithms predict the quality of future viewpoints as the expected entropy over both object class and pose. Occlusions are explicitly modeled by predicting the visible regions of objects, which modulates the discriminatory value of a given view. We implement a one-step greedy planner and demonstrate our method online using a mobile robot. We also analyze the performance of our method compared to similar strategies in simulated execution using the Willow Garage dataset. Results show that our active method usefully reduces the number of views required to accurately classify objects in clutter as compared to traditional passive perception.
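To make the selection criterion concrete, the following is a minimal sketch of one-step greedy, entropy-based viewpoint selection. It assumes a discrete set of joint class/pose hypotheses and precomputed observation likelihoods per candidate view; the `Viewpoint` type, the `visibility` weights, and the occlusion blend are illustrative stand-ins rather than the paper's exact formulation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Viewpoint:
    id: int
    likelihoods: np.ndarray  # (n_observations, n_hypotheses): p(obs | hypothesis)

def expected_entropy(view_likelihoods: np.ndarray, prior: np.ndarray) -> float:
    """Expected posterior entropy over class/pose hypotheses after
    observing from a candidate viewpoint."""
    evidence = view_likelihoods @ prior                 # p(o), shape (n_obs,)
    posterior = view_likelihoods * prior                # unnormalized p(h | o)
    posterior /= evidence[:, None] + 1e-12
    ent = -np.sum(posterior * np.log(posterior + 1e-12), axis=1)
    return float(evidence @ ent)                        # average over observations

def greedy_next_view(views, prior, visibility):
    """One-step greedy planner: pick the view with the lowest expected
    entropy, discounting views predicted to be occluded (hypothetical
    blend: an occluded view leaves the current belief entropy intact)."""
    h_now = -np.sum(prior * np.log(prior + 1e-12))
    best_view, best_score = None, np.inf
    for v in views:
        score = expected_entropy(v.likelihoods, prior)
        score = visibility[v.id] * score + (1 - visibility[v.id]) * h_now
        if score < best_score:
            best_view, best_score = v, score
    return best_view

# Toy usage: two candidate views, view 1 is half occluded.
prior = np.array([0.5, 0.3, 0.2])
views = [Viewpoint(0, np.array([[0.8, 0.1, 0.1], [0.2, 0.9, 0.9]])),
         Viewpoint(1, np.array([[0.4, 0.3, 0.3], [0.6, 0.7, 0.7]]))]
print(greedy_next_view(views, prior, {0: 0.9, 1: 0.5}).id)
```

In this sketch the occlusion model enters only as a scalar visibility weight per view; the paper's method predicts visible object regions explicitly, which this toy blend merely approximates.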
