Abstract

Because they do not require an on-board pilot, drones are well suited to dull, dangerous and dirty missions. However, when a mission covers a large operational area and/or involves several objectives, a single drone may perform poorly. Drone teams can overcome this limitation by acting as mobile sensor networks for proximal sensing. In such networks, cooperative autonomy is a key enabling behaviour for achieving resilient and cost-efficient systems. This work implements cooperative autonomous behaviour as a dynamic, decentralized mission planner for a multi-drone inspection mission. The proposed design combines multi-agent task allocation, distributed route planning and game theory to assign inspection tasks and to compute optimal routes within reasonable time frames and under limited communication. In particular, it applies the learning-in-games framework to coordinate the inspection team, studying ad hoc variants of best response and of log-linear learning. Finally, the work reports numerical results from model-in-the-loop tests comparing the learning-in-games approaches.
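To make the learning-in-games idea concrete, the following is a minimal, illustrative sketch of a generic log-linear learning loop applied to a task-allocation game. It is not the paper's implementation; the utility function, the temperature parameter tau, and all names (num_drones, num_tasks, log_linear_learning) are hypothetical assumptions chosen only to show the general mechanism, in which one agent at a time revises its task choice by sampling from a softmax over its candidate payoffs (and tau -> 0 approaches best response).

import math
import random

def utility(assignment, drone):
    # Hypothetical congestion-style payoff: a drone's reward for the task
    # it selected decreases with the number of drones sharing that task.
    task = assignment[drone]
    load = sum(1 for a in assignment if a == task)
    return 1.0 / load

def log_linear_learning(num_drones, num_tasks, steps=1000, tau=0.1, seed=0):
    rng = random.Random(seed)
    # Start from a random assignment: one chosen task per drone.
    assignment = [rng.randrange(num_tasks) for _ in range(num_drones)]
    for _ in range(steps):
        # Pick one drone uniformly at random to revise its choice.
        drone = rng.randrange(num_drones)
        # Evaluate each candidate task with the other drones' choices fixed.
        payoffs = []
        for task in range(num_tasks):
            trial = list(assignment)
            trial[drone] = task
            payoffs.append(utility(trial, drone))
        # Sample the new task from a Boltzmann (softmax) distribution over
        # the payoffs; smaller tau makes the update closer to best response.
        weights = [math.exp(p / tau) for p in payoffs]
        threshold = rng.random() * sum(weights)
        cumulative = 0.0
        for task, w in enumerate(weights):
            cumulative += w
            if threshold <= cumulative:
                assignment[drone] = task
                break
    return assignment

if __name__ == "__main__":
    # Example run: 4 drones allocating themselves over 3 inspection tasks.
    print(log_linear_learning(num_drones=4, num_tasks=3))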
