Abstract

We propose a generic view planning algorithm that automatically adjusts sensor postures for multi-view perception. A formal description of the perception task is put forward, comprising the objects' prior-information library, the perception status, and the tasks' completion status. Because the view planning system operates on this formal expression, it is not restricted to specific tasks. We represent the perceived information about the objects by features that distinguish all candidate classes, together with features that define the rough shape of each class. The existence of features in each candidate class, the description of the features, and the locational relationships between features are known before perception and are filled into a fixed-form prior-information library. All statuses are updated whenever data is received at a new view. To select the view that maximizes the acquisition of effective information, candidate views are ranked by a weighted evaluation function based on the updated statuses. Experiments on view planning for 3D recognition and reconstruction tasks show that our algorithm performs well across multiple tasks.
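The view-selection step described above can be sketched as follows. This is a minimal, hypothetical illustration of ranking candidate views with a weighted evaluation function; the criterion names (`distinguishing`, `coverage`, `unseen`) and the weights are assumptions for illustration, not the paper's actual scoring terms.

```python
# Hypothetical sketch: rank candidate views by a weighted evaluation
# function over per-view information scores, then pick the best view.

def score_view(view, weights):
    """Weighted sum of per-criterion scores for one candidate view."""
    return sum(weights[k] * view[k] for k in weights)

def select_next_view(candidate_views, weights):
    """Sort candidate views by descending score and return the top one."""
    ranked = sorted(candidate_views,
                    key=lambda v: score_view(v, weights),
                    reverse=True)
    return ranked[0]

# Each candidate view carries assumed scores for how much effective
# information it is expected to contribute after the status update.
views = [
    {"id": 0, "distinguishing": 0.2, "coverage": 0.9, "unseen": 0.1},
    {"id": 1, "distinguishing": 0.8, "coverage": 0.4, "unseen": 0.6},
    {"id": 2, "distinguishing": 0.5, "coverage": 0.5, "unseen": 0.3},
]
weights = {"distinguishing": 0.5, "coverage": 0.3, "unseen": 0.2}

best = select_next_view(views, weights)
# best["id"] == 1: it scores 0.5*0.8 + 0.3*0.4 + 0.2*0.6 = 0.64,
# the highest weighted score among the three candidates.
```

In a full system the per-criterion scores would be recomputed from the updated perception and completion statuses after each new view, so the ranking changes as the task progresses.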

