Abstract

We propose a robust approach to detecting and tracking moving objects for a naval unmanned aircraft system (UAS) landing on an aircraft carrier. The frame difference algorithm follows a simple principle to achieve real-time tracking, whereas the Faster Region-based Convolutional Neural Network (Faster R-CNN) offers highly precise detection. We therefore combine Faster R-CNN with the frame difference method and demonstrate that the combination achieves robust, real-time detection and tracking. In our UAS landing experiments, two cameras placed on either side of the runway capture the moving UAS. When the UAS enters the field of view, the joint algorithm first uses frame differencing to detect the moving target (the UAS). As soon as Faster R-CNN detects the UAS accurately, detection priority is handed to Faster R-CNN. In this manner, we also perform motion segmentation and object detection under environmental changes such as illumination variation or walking pedestrians. By combining the two algorithms, we accurately detect and track objects with a tracking accuracy of up to 99% at frame rates of up to 40 frames per second. This lays a solid foundation for subsequent landing guidance.
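The switching rule described above can be sketched as a small decision function. This is a minimal illustration, not the paper's implementation: the confidence threshold and the function and variable names are our assumptions.

```python
def fuse_detections(frame_diff_box, faster_rcnn_box, rcnn_confidence,
                    confidence_threshold=0.9):
    """Combine the two detectors: frame differencing supplies a fast
    fallback detection, and once Faster R-CNN reports a confident
    detection its bounding box takes priority.

    Boxes are (x, y, w, h) tuples or None; the 0.9 threshold is an
    illustrative assumption, not a value from the paper.
    """
    if faster_rcnn_box is not None and rcnn_confidence >= confidence_threshold:
        return faster_rcnn_box, "faster_rcnn"
    return frame_diff_box, "frame_difference"
```

In a tracking loop this function would be called once per frame, so the system degrades gracefully to frame differencing whenever the detector loses the target.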

Highlights

  • Unmanned aircraft systems (UAS) have become a major trend in robotics research in recent decades

  • To concretize the idea of motion smoothness, we center the Region of Interest (ROI) in the current frame on (x, y), the top-left corner of the bounding box in the previous frame: ROI = frame[y − k·h : y + k·h, x − k·w : x + k·w], where w and h are the width and height of the previous bounding box

  • k is set to 4, which we find keeps the target inside the ROI with high probability throughout the landing process
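The ROI rule in the highlights can be written as a short cropping function. This is a sketch under stated assumptions: the clamping of indices to the image bounds is our addition, since the highlights do not say how image edges are handled.

```python
import numpy as np

def crop_search_roi(frame, x, y, w, h, k=4):
    """Crop the search ROI around (x, y), the top-left corner of the
    previous frame's bounding box of width w and height h, following
    ROI = frame[y - k*h : y + k*h, x - k*w : x + k*w] with k = 4.

    Index clamping at the image borders is an assumption for this sketch.
    """
    H, W = frame.shape[:2]
    y0, y1 = max(0, y - k * h), min(H, y + k * h)
    x0, x1 = max(0, x - k * w), min(W, x + k * w)
    return frame[y0:y1, x0:x1]
```

For a 20×10 previous box well inside the image, the resulting ROI is 2k·h rows by 2k·w columns, i.e. 80×160 pixels with k = 4.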


Summary

Introduction

Unmanned aircraft systems (UAS) have become a major trend in robotics research in recent decades and have emerged in an increasing number of applications, both military and civilian. The opportunities and challenges of this fast-growing field are summarized by Kumar et al. [1]. Takeoff and landing involve the most complex processes; in particular, autonomous landing in unknown or Global Navigation Satellite System (GNSS)-denied environments remains an open problem. With the fusion and development of computer vision and image processing, the application of visual navigation to UAS automatic landing has widened, and computer vision in UAS landing has achieved a number of accomplishments in recent years. The frame difference detector extracts the contour of the moving region and locates the center of the target. For single-target tracking, we make the following improvements to the Faster R-CNN algorithm.
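The difference-detector pipeline mentioned above (difference, contour, target center) can be approximated with a few array operations. This is a minimal NumPy sketch, not the paper's implementation: the threshold value is an assumption, and the centroid of changed pixels stands in for the contour center.

```python
import numpy as np

def detect_by_frame_difference(prev_frame, curr_frame, threshold=25):
    """Frame-difference detection sketch for grayscale uint8 frames:
    absolute difference -> threshold -> centroid of changed pixels.

    Returns the (cx, cy) center of the moving region, or None when no
    pixel change exceeds the (assumed) threshold.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no motion detected
    return int(xs.mean()), int(ys.mean())
```

A production version would typically use OpenCV (`cv2.absdiff`, `cv2.findContours`) and morphological filtering to suppress noise before taking the contour center.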

