Abstract

Accurate indoor and outdoor 3D mapping is a valuable resource for a variety of applications. This paper describes an autonomous platform capable of generating 3D imagery of the environment in unknown indoor and outdoor contexts. The system is composed of a number of Data Fusion processes that are executed in real time by on-board and/or off-board processing nodes. The platform's sensing suite comprises multiple laser scanners for 2D and 3D perception, IMU units, 3D cameras (an indoor Kinect), standard cameras, GPS (for outdoor operation) and dead-reckoning sensors. The acquired data is shared with multiple client processes that are in charge of different levels of perception and control. The data produced by the perception processes is in turn shared with higher-level processes such as 3D mapping, the generation of maps of diverse dense properties, and the detection and classification of obstacles and other application-specific context features.
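
The abstract does not specify how sensor data is shared between the acquisition, perception and higher-level processes. As a purely illustrative sketch, not the authors' implementation, the data-sharing pattern might resemble a simple publish/subscribe arrangement in which each client process receives its own copy of every sensor message; all names below (SensorBus, lidar_driver, mapping_client) are hypothetical.

```python
# Illustrative sketch only: a minimal publish/subscribe data-sharing pattern,
# standing in for the platform's (unspecified) middleware.
import queue
import threading
import time


class SensorBus:
    """Fans out sensor messages to every registered client process/thread."""

    def __init__(self):
        self._subscribers = []          # one queue per perception client
        self._lock = threading.Lock()

    def subscribe(self):
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

    def publish(self, topic, payload):
        # Share acquired data with all clients (e.g. mapping, obstacle detection).
        with self._lock:
            for q in self._subscribers:
                q.put((topic, payload))


def lidar_driver(bus):
    # Hypothetical driver: publishes fake 2D scan data at roughly 10 Hz.
    for seq in range(3):
        bus.publish("scan_2d", {"ranges": [1.0, 1.2, 0.9], "seq": seq})
        time.sleep(0.1)


def mapping_client(q, done):
    # Hypothetical higher-level process consuming the shared perception data.
    while not done.is_set() or not q.empty():
        try:
            topic, payload = q.get(timeout=0.2)
            print(f"[mapping] received {topic}: {payload}")
        except queue.Empty:
            pass


if __name__ == "__main__":
    bus = SensorBus()
    done = threading.Event()
    q = bus.subscribe()  # register the mapping client before data starts flowing
    client = threading.Thread(target=mapping_client, args=(q, done))
    client.start()
    lidar_driver(bus)    # publisher runs in the main thread for simplicity
    done.set()
    client.join()
```

In a deployed system this role would typically be filled by robotics middleware (for example a ROS-style topic mechanism), but the fan-out-per-subscriber design shown here captures the idea of one acquisition pipeline feeding several independent perception and control clients.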
