Abstract

Visual sensing of the environment is crucial for flying an unmanned aerial vehicle (UAV) and is a centerpiece of many related applications. The ability to run computer vision and machine learning algorithms onboard an unmanned aerial system (UAS) is increasingly a necessity: it alleviates the communication burden of high-resolution video streaming, enables flying aids such as obstacle avoidance and automated landing, and paves the way toward autonomous machines. There is therefore growing interest among researchers in developing and validating solutions suitable for deployment on a UAV, following the general trend of edge processing and airborne computing, which transforms UAVs from moving sensors into intelligent nodes capable of local processing. In this paper, we present, in a rigorous way, the design and implementation of a UAV system equipped with the computational power and sensors necessary to serve as a testbed for image processing and machine learning applications. We explain the rationale behind our decisions, highlight selected implementation details, and showcase the usefulness of our system with an example of how a sample computer vision application can be deployed on our platform.

Highlights

  • Unmanned aerial vehicles (UAVs) can either be operated as remotely piloted vehicles or be pre-programmed to conduct their flight without intervention

  • We present our UAV platform, which has so far been used in one EU-funded research project, ANDROMEDA [22], focused on cross-border collaboration and information exchange, and is being utilized in another project, EFFECTOR [23], which develops an interoperability framework for maritime surveillance

  • The Wi-Fi link uses a set of antennas placed on each side of the UAV at a 45-degree inclination, while the disc-shaped objects on the back of the UAV’s top side are the global navigation satellite system (GNSS) antennas

Summary

Introduction

UAVs can either be operated as remotely piloted vehicles or be pre-programmed to conduct their flight without intervention. Commercial off-the-shelf (COTS) UAVs almost always include a camera for manual control, and they offer an increasing degree of autonomous operation in the form of return-to-home functions, predefined flight plan execution, etc. These capabilities are augmented with ever more sophisticated obstacle avoidance functionality that exploits various sensors, such as sonars, lidars, and cameras. DJI’s high-end UAV, the Matrice 300 RTK [1], carries 13 cameras: one for remote piloting and 12 for obstacle avoidance, not counting any more that are added as a payload. Applications that combine computer vision, machine learning, and UAVs, such as orchard management or parcel delivery, seem plausible thanks to advances in related algorithms [2,3].
