Abstract
Localization and navigation play a key role in many location-based services and have attracted numerous research efforts. In recent years, visual SLAM has become prevalent in autonomous driving. However, the ever-growing computation resources demanded by SLAM impede its application on resource-constrained mobile devices. In this paper, we present the design, implementation, and evaluation of <i>edgeSLAM</i>, an edge-assisted real-time semantic visual SLAM service running on mobile devices. <i>edgeSLAM</i> leverages a state-of-the-art semantic segmentation algorithm to enhance localization and mapping accuracy, and speeds up the computation-intensive SLAM and semantic segmentation algorithms by computation offloading. The key innovations of <i>edgeSLAM</i> include an efficient computation offloading strategy, an opportunistic data sharing method, an adaptive task scheduling algorithm, and a multi-user support mechanism. We fully implement <i>edgeSLAM</i> and plan to open-source it. Extensive experiments are conducted on 3 datasets. The results show that <i>edgeSLAM</i> runs on mobile devices at 35fps and achieves 5cm localization accuracy in real-world experiments, outperforming existing solutions by more than 15%. We also demonstrate the usability of <i>edgeSLAM</i> through 2 case studies of pedestrian localization and robot navigation. To the best of our knowledge, <i>edgeSLAM</i> is the first edge-assisted real-time semantic visual SLAM system for mobile devices.