Abstract

This letter proposes new prediction models for Visual Predictive Control that can lead to both better motions in the feature space and shorter sensor trajectories in 3D. Unlike existing first-order models based only on the interaction matrix, the proposed models integrate the acceleration information provided by second-order models. This makes it possible to better estimate the evolution of the image features and, consequently, to compute control inputs that properly steer the system to a desired configuration. Simulations demonstrate the performance of these new predictors and compare it with that of a classical model. Experiments using both image point features and polar coordinates confirm the validity and generality of the approach, showing that the increased complexity of the predictors does not prevent real-time implementation.
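
To make the contrast between the two prediction schemes concrete, the following is a minimal sketch, not the authors' implementation, of a first-order feature prediction based on the classical interaction matrix of a normalized image point versus a second-order prediction that also uses an acceleration term. The time step, the camera twist and acceleration, and the finite-difference approximation of the interaction-matrix derivative are illustrative assumptions (depth variation is ignored for simplicity).

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def predict_first_order(s, Z, v, dt):
    """Classical model: s_{k+1} ~= s_k + dt * L(s_k) v_k."""
    L = interaction_matrix(s[0], s[1], Z)
    return s + dt * (L @ v)

def predict_second_order(s, Z, v, a, dt):
    """Second-order model: s_{k+1} ~= s_k + dt * s_dot + 0.5 * dt^2 * s_ddot,
    with s_ddot approximated as L a + L_dot v (L_dot via finite differences)."""
    L = interaction_matrix(s[0], s[1], Z)
    s_dot = L @ v
    # Rough finite-difference estimate of dL/dt along the first-order motion
    # (assumption; the depth is kept constant over the small step).
    eps = 1e-3
    s_eps = s + eps * s_dot
    L_dot = (interaction_matrix(s_eps[0], s_eps[1], Z) - L) / eps
    s_ddot = L @ a + L_dot @ v
    return s + dt * s_dot + 0.5 * dt**2 * s_ddot

if __name__ == "__main__":
    s0 = np.array([0.1, -0.05])                        # normalized image coordinates
    Z = 1.5                                            # assumed point depth [m]
    v = np.array([0.1, 0.0, 0.05, 0.0, 0.02, 0.0])     # camera twist (linear, angular)
    a = np.array([0.0, 0.0, 0.01, 0.0, 0.0, 0.0])      # camera acceleration
    dt = 0.1
    print("first-order :", predict_first_order(s0, Z, v, dt))
    print("second-order:", predict_second_order(s0, Z, v, a, dt))
```

In a Visual Predictive Control loop, such a predictor would be rolled out over the prediction horizon to evaluate candidate control inputs; the second-order variant trades a modest amount of extra computation for a more accurate forecast of the feature trajectories.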
