Abstract

Unmanned aerial vehicles (UAVs) are expected to provide a range of services, from video surveillance to communication facilities, in critical and high-demand scenarios. Augmented reality streaming services are especially demanding in terms of required throughput, computing resources at the user device, and user data collection for advanced applications, such as location-based or interactive ones. This work focuses on the experimental use of a framework that adopts reinforcement learning (RL) to define the paths flown by UAVs delivering resources for augmented reality services. We develop an OpenAI Gym-based simulator, tuned and tested to study the behavior of RL-trained UAVs that fly around a given area and serve augmented reality users. We provide abstractions for the environment, the UAVs, the users, and their requests. A reward function is then defined to encompass several quality-of-experience parameters. We train our agents and observe how they behave as a function of the number of UAVs and users at different hours of the day.
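
To make the framework description more concrete, below is a minimal sketch of how such an OpenAI Gym environment could be structured. The class name, grid size, movement model, and coverage-based reward are illustrative assumptions for a single UAV, not the authors' actual implementation, whose reward encompasses several quality-of-experience parameters.

import numpy as np
import gym
from gym import spaces

class UAVARServiceEnv(gym.Env):
    # Hypothetical single-UAV environment: the agent moves on a discrete grid
    # and is rewarded for the fraction of AR users inside its coverage radius.
    def __init__(self, grid_size=20, n_users=5, coverage_radius=3.0):
        super().__init__()
        self.grid_size = grid_size
        self.n_users = n_users
        self.coverage_radius = coverage_radius
        self.action_space = spaces.Discrete(5)  # stay, north, south, east, west
        # Observation: UAV (x, y) followed by each user's (x, y)
        self.observation_space = spaces.Box(
            low=0.0, high=float(grid_size),
            shape=(2 + 2 * n_users,), dtype=np.float32)

    def reset(self):
        self.uav = np.full(2, self.grid_size / 2, dtype=np.float32)
        self.users = np.random.uniform(
            0, self.grid_size, size=(self.n_users, 2)).astype(np.float32)
        return self._obs()

    def _obs(self):
        return np.concatenate([self.uav, self.users.ravel()])

    def step(self, action):
        moves = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
        self.uav = np.clip(
            self.uav + np.asarray(moves[action], dtype=np.float32),
            0, self.grid_size)
        # Stand-in reward: share of users currently covered by the UAV.
        # The paper's reward instead combines several QoE parameters.
        dists = np.linalg.norm(self.users - self.uav, axis=1)
        reward = float(np.mean(dists <= self.coverage_radius))
        return self._obs(), reward, False, {}

An environment of this kind can be trained with any off-the-shelf RL library (e.g., Stable-Baselines3) and extended to multiple UAVs, time-varying user demand, and a richer QoE-based reward along the lines described in the paper.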

Highlights

  • Recent advances in science and communication make it possible to bring into everyday life new and exciting technologies that improve the user experience in surprising and innovative ways

  • The metrics for the cases with more than 5 users show that a single unmanned aerial vehicle (UAV) cannot serve all the users, but it can provide valuable support to other sources of connectivity and services, such as a base station or other UAVs

  • We present a new framework to simulate multi-UAV service providers, including a simulator and a reinforcement learning environment

Summary

INTRODUCTION

Recent advances in science and communication make it possible to bring into everyday life new and exciting technologies that improve the user experience in surprising and innovative ways. New standards have been released (Lafruit et al., 2019) for the compression of 3D static and dynamic visual content in immersive reality multimedia services. The increasing popularity of these high-tech facilities will undoubtedly increase network traffic and the users' need for both large amounts of exchanged data and computational capacity. Supporting such services remains very challenging, especially in rural environments or wherever high-bandwidth interconnections are unavailable. By employing UAVs as flying service providers, resources can be delivered to users without modifying the preexisting infrastructure, making these new services feasible, sustainable, and accessible.

Our Goal
Reinforcement Learning Advantages for UAV Applications
RELATED WORK
OPERATIONAL SCENARIO
Space and Time
Model of User Requests
REINFORCEMENT LEARNING ENVIRONMENT
Agents Model
Simulation Framework
Training
EXPERIMENTAL SETTINGS
Metrics Employed
EXPERIMENTAL RESULTS
First Scenario
Second Scenario
Third Scenario
Comparative Baseline
CONCLUSION AND FUTURE WORK
DATA AVAILABILITY STATEMENT