Abstract

During autonomous landing of an unmanned aerial vehicle (UAV), the vision sensor is limited by its field of view and by the UAV's maneuvering, so the acquired relative position/attitude parameters may become unstable or even singular (not unique). In addition, there is a 'blind area' of visual measurement during the rollout stage, where navigation capability is lost and landing safety is seriously affected. This paper proposes an autonomous landing navigation method based on inertial/vision sensor information fusion. When the UAV is far from the airport and the runway is imaged completely, the landing navigation parameters are determined by the vision sensor from the object-image conjugate relationship of the runway sidelines and fused with inertial information to improve measurement performance. When the UAV is close to the airport and the runway image is incomplete, the vision measurement becomes singular, and the landing navigation parameters are estimated from inertial information aided by vision. During rollout, the vision sensor enters the 'blind area'; the UAV's motion state is judged from the imaging features of two adjacent frames, and the resulting motion-state constraint is used to suppress inertial sensor errors, so that the landing navigation parameters are maintained with high precision. Flight tests show that the lateral relative position error is less than 10 m when a low-accuracy inertial sensor and a vision sensor are used, which meets the requirement for safe UAV landing.

Keywords: Autonomous landing navigation; Deep learning semantic segmentation; Inertial/Vision data fusion
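The fusion logic summarized above can be illustrated with a minimal, hypothetical sketch. This is not the paper's implementation: it reduces the problem to a 1-D lateral channel, and all names, noise values, phase boundaries, and the simplified dynamics are illustrative assumptions. It shows the three regimes described in the abstract: inertial propagation corrected by vision when the runway is fully imaged, coasting on inertial data when the vision measurement is unreliable, and a zero-lateral-velocity motion-state constraint during rollout when vision is in its blind area.

```python
# Hypothetical sketch of inertial/vision fusion for the lateral landing channel.
# Not the authors' algorithm; model, noise values, and phase switching are assumed.
import numpy as np

def predict(x, P, a_lat, dt, q=0.5):
    """Propagate lateral position/velocity with inertially measured lateral acceleration."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * a_lat
    P = F @ P @ F.T + q * np.outer(B, B)
    return x, P

def update(x, P, z, H, r):
    """Scalar Kalman measurement update."""
    H = np.asarray(H, dtype=float)
    S = H @ P @ H + r          # innovation variance
    K = (P @ H) / S            # Kalman gain
    x = x + K * (z - H @ x)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

dt = 0.02
x = np.array([12.0, -0.8])     # [lateral offset m, lateral velocity m/s], assumed initial state
P = np.diag([25.0, 1.0])

for k in range(3000):
    a_lat = 0.05 * np.sin(0.01 * k)           # stand-in for inertial lateral acceleration
    x, P = predict(x, P, a_lat, dt)

    if k < 1500:
        # Far from the runway: complete runway imaging -> vision supplies a lateral offset fix.
        z_vision = x[0] + np.random.normal(0.0, 1.0)
        x, P = update(x, P, z_vision, H=[1.0, 0.0], r=1.0)
    elif k < 2500:
        # Close to the runway: incomplete imaging, vision measurement singular/unreliable;
        # coast on inertial propagation with vision used only as an aid (no position fix here).
        pass
    else:
        # Rollout: vision blind area. If adjacent frames indicate straight taxiing,
        # apply a zero-lateral-velocity pseudo-measurement to bound inertial drift.
        x, P = update(x, P, 0.0, H=[0.0, 1.0], r=0.01)

print("final lateral offset estimate [m]:", round(float(x[0]), 2))
```

The design choice illustrated here is that the rollout constraint is applied as a pseudo-measurement on velocity rather than on position, which is one common way to keep inertial drift bounded when no external position fix is available.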
