Abstract

In this paper, a multilevel architecture that interfaces an on-board computer with a generic UAV flight controller and its radio receiver is proposed. The computer board exploits the same standard communication protocol used by UAV flight controllers and can easily access additional data, such as: (i) inertial sensor measurements coming from a multi-sensor board; (ii) global navigation satellite system (GNSS) coordinates; (iii) streaming video from one or more cameras; and (iv) operator commands from the remote control. In specific operating scenarios, the proposed platform is able to act as a “cyber pilot” that replaces the human UAV operator, thus simplifying the development of complex tasks such as those based on computer vision and artificial intelligence (AI) algorithms, which are typically employed in autonomous flight operations.
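The "cyber pilot" role described above can be illustrated with a minimal command-multiplexing sketch. All names below (CommandMux, Command, Source) are hypothetical and not taken from the paper's implementation; the real platform exchanges flight-controller protocol messages rather than plain Python objects. The sketch only shows the arbitration idea: commands flow to the flight controller either from the operator's radio receiver or from software running on the companion computer, with the operator's mode switch always able to reclaim control.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Source(Enum):
    """Who is currently in command of the vehicle (illustrative)."""
    OPERATOR = auto()      # commands pass through from the radio receiver
    CYBER_PILOT = auto()   # commands are generated on the companion computer


@dataclass
class Command:
    """A simplified attitude/thrust setpoint; a real system would use
    protocol messages, not this hypothetical structure."""
    roll: float
    pitch: float
    yaw: float
    throttle: float


class CommandMux:
    """Multiplexes operator and autonomous commands toward the flight controller.

    Flipping the mode back to OPERATOR immediately restores manual control,
    which is the usual safety convention in companion-computer setups.
    """

    def __init__(self) -> None:
        self.source = Source.OPERATOR  # manual control by default

    def set_mode(self, source: Source) -> None:
        self.source = source

    def select(self, operator_cmd: Command, autonomous_cmd: Command) -> Command:
        # Forward exactly one of the two candidate commands downstream.
        return operator_cmd if self.source is Source.OPERATOR else autonomous_cmd
```

In a real deployment the `select` step would run at the control-loop rate, forwarding the chosen setpoint to the flight controller over the standard telemetry link.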

Highlights

  • In the scientific community, one of the topics of major interest in the UAV field concerns autonomous navigation based on computer vision [1,2]

  • The difficulty of such a mission is due to the limited availability and accuracy of data from the traditional sensors (i.e., global navigation satellite system (GNSS), inertial measurement unit (IMU), compass and barometer) in small indoor environments

  • GNSS is simply unavailable indoors, the compass is negatively affected by the building structure, the barometer has insufficient precision in small spaces, and the IMU alone is, in general, not well suited for navigation because of the well-known problem of drift



Introduction

One of the topics of major interest in the UAV field concerns autonomous navigation based on computer vision [1,2]. Ground-assisted motion capture (mocap) systems achieve high positioning precision and are therefore often used to compare different control techniques, since they make even small differences in the performance of different control algorithms appreciable. However, the research progress made with mocap systems is not reflected in the industrial sector, because navigation algorithms and control techniques that exploit this ground-assisted technology can hardly be deployed in the typical operating environments of the field. Thanks to technological evolution and the proliferation of open-source boards such as the Raspberry Pi and Nvidia Jetson (running operating systems such as Linux), the research community has become increasingly interested in on-board vision-based navigation (VBN) techniques [9,10,11,12].


