Abstract

Research in autonomous driving has benefited from a number of visual datasets collected from mobile platforms, leading to improved visual perception, greater scene understanding, and ultimately higher intelligence. However, this set of existing data collectively represents only highly structured, urban environments. Operation in unstructured environments, e.g., humanitarian assistance and disaster relief or off-road navigation, bears little resemblance to these existing data. To address this gap, we introduce the Robot Unstructured Ground Driving (RUGD) dataset with video sequences captured from a small, unmanned mobile robot traversing unstructured environments. Most notably, this data differs from existing autonomous driving benchmark data in that it contains significantly more terrain types, irregular class boundaries, minimal structured markings, and challenging visual properties often experienced in off-road navigation, e.g., blurred frames. Over 7,000 frames of pixel-wise annotations are included with this dataset, and we perform an initial benchmark using state-of-the-art semantic segmentation architectures to demonstrate the unique challenges this data introduces as it relates to navigation tasks.
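As a rough illustration of the kind of evaluation such a semantic segmentation benchmark involves, the sketch below computes per-class and mean intersection-over-union (IoU) from pixel-wise label masks using NumPy. This is a minimal sketch, not the paper's evaluation code; the class count, mask shapes, and function name are hypothetical.

```python
import numpy as np


def per_class_iou(pred, gt, num_classes):
    """Per-class intersection-over-union for pixel-wise label masks.

    pred, gt : integer arrays of identical shape holding class indices.
    num_classes : total number of semantic classes (hypothetical value below).
    Returns an array of IoU scores, NaN for classes absent from both masks.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union > 0:
            ious[c] = np.logical_and(pred_c, gt_c).sum() / union
    return ious


if __name__ == "__main__":
    # Toy example: random ground-truth and prediction masks with 3 classes.
    gt = np.random.randint(0, 3, size=(120, 160))
    pred = np.random.randint(0, 3, size=(120, 160))
    ious = per_class_iou(pred, gt, num_classes=3)
    print("per-class IoU:", ious)
    print("mean IoU:", np.nanmean(ious))
```

In practice the predictions would come from a trained segmentation network and the ground truth from the dataset's pixel-wise annotations; the mean is taken over classes, ignoring those absent from both masks.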
