Abstract

We present a new visual control input derived from optical flow divergence that enables the design of novel, unified control laws for docking and landing. While divergence-based time-to-contact estimation is well understood, existing uses of divergence in visual control assume knowledge of surface orientation and/or egomotion; no directly observable visual cue has been available to support approaches to surfaces of arbitrary orientation under general motion. Central to our measure is the maximum of the flow field divergence over the view sphere (max-div). We prove kinematic properties governing the location of max-div and show that max-div provides a temporal measure of proximity. From this, we derive novel control laws for regulating both approach velocity and angle of approach toward planar surfaces of arbitrary orientation, without structure-from-motion recovery. The strategy is tested in simulation, over real image sequences, and in closed-loop control of docking/landing maneuvers on a mobile platform.
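For context on the classical result the abstract builds upon, the sketch below illustrates the well-known relationship between flow field divergence and time-to-contact in the restricted fronto-parallel case (div = 2/tau for pure approach toward a plane perpendicular to the motion). This is not the authors' max-div method; the function names and the synthetic flow field are hypothetical and serve only to make the baseline relation concrete.

```python
# Minimal sketch (assumed, not from the paper): time-to-contact from the
# divergence of a dense optical flow field, valid for pure approach toward
# a fronto-parallel plane -- the classical case the paper generalizes.
import numpy as np

def flow_divergence(flow_u: np.ndarray, flow_v: np.ndarray) -> np.ndarray:
    """Divergence of a dense flow field (pixels/frame) via finite differences."""
    du_dx = np.gradient(flow_u, axis=1)  # d(u)/dx
    dv_dy = np.gradient(flow_v, axis=0)  # d(v)/dy
    return du_dx + dv_dy

def time_to_contact(flow_u: np.ndarray, flow_v: np.ndarray) -> float:
    """Classical estimate: div = 2/tau, so tau ~ 2 / (mean divergence), in frames."""
    mean_div = float(np.mean(flow_divergence(flow_u, flow_v)))
    return 2.0 / mean_div if mean_div > 1e-9 else np.inf

# Hypothetical usage: a synthetic radially expanding flow with its focus of
# expansion at the image centre, divergence 0.02 per frame -> tau ~ 100 frames.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w].astype(float)
u = 0.01 * (x - w / 2)
v = 0.01 * (y - h / 2)
print(time_to_contact(u, v))  # approximately 100.0
```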
