Abstract
We investigate UAV-IoT data capture and networking for remote scene virtual reality (VR) immersion. We characterize the delivered immersion fidelity as a function of the assigned UAV-IoT capture/network rates and study the optimization problem of maximizing it under given system/application constraints. We explore fast reinforcement learning to discover the best dynamic UAV-IoT network placement over the scene of interest that maximizes the expected remote immersion fidelity. We design scalable source-channel viewpoint coding to maximize the expected reconstruction fidelity, at the ground-based aggregation point, of the data captured at every UAV location. Finally, as the fourth system component of our framework, we explore layered directional networking and rate-distortion-power optimized embedded scheduling to effectively transmit the encoded data and overcome network transients that lead to packet buffering. Experimental results demonstrate considerable performance efficiency gains enabled by each system component over the respective state-of-the-art reference methods in delivered VR immersion fidelity, application interactivity/play-out latency, and transmission power consumption.