Abstract

We present a platform to foster research in active scene understanding, consisting of high-fidelity simulated environments and a simple yet powerful API that controls a mobile robot in simulation and reality. In contrast to static, pre-recorded datasets that focus on the perception aspect of scene understanding, agency is a top priority in our work. We provide three levels of robot agency, allowing users to control a robot at varying levels of difficulty and realism. While the most basic level provides pre-defined trajectories and ground-truth localisation, the more realistic levels allow us to evaluate integrated behaviours comprising perception, navigation, exploration and SLAM. Unlike existing simulation environments, our environment interface (BenchBot) supports robust scene understanding research through a simple API that enables a seamless transition between the simulated environments and real robotic platforms. We believe this scaffolded design is an effective approach to bridging the gap between classical static datasets, which offer no agency, and the unique challenges of evaluating robots in reality. Our BenchBot Environments for Active Robotics (BEAR) comprise 25 indoor environments under day and night lighting conditions, a total of 1443 objects to be identified and mapped, and ground-truth 3D bounding boxes for use in evaluation. BEAR website: https://qcr.github.io/dataset/benchbot-bear-data/.
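
For context, the BenchBot API exposes the robot through a reset/step loop, so the same agent code can drive both the simulated and the real robot. The sketch below is a minimal illustration based on the open-source BenchBot Python client (github.com/qcr/benchbot_api); the action name 'move_next' (used here for the pre-defined-trajectory agency level) and the observation key 'image_rgb' follow the project's conventions but should be verified against the installed version.

    from benchbot_api import ActionResult, BenchBot

    # One client object talks to the BenchBot backend, whether it is
    # running a simulated environment or a real robot.
    benchbot = BenchBot()

    # Start (or restart) the current task and get initial observations.
    observations, action_result = benchbot.reset()

    while action_result == ActionResult.SUCCESS:
        # At the most basic agency level the robot simply advances to
        # the next pose on its pre-defined trajectory; the more
        # realistic levels replace this with relative motion commands
        # (inspect benchbot.actions for what the current task allows).
        observations, action_result = benchbot.step('move_next')

        rgb = observations['image_rgb']  # e.g. feed each frame to a detector

Because only the backend differs between simulation and the real platform, a loop like this runs unchanged across environments, with the agency level determining which actions are available at each step.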
