Abstract

Modern AGVs are equipped with several safety laser scanners that provide a combined 360° field of view around the vehicle to detect and subsequently avoid collisions with other AGVs, structural elements and, most importantly, workers. This contactless environment perception approach fulfils current safety legislation and regulations for driverless industrial trucks. However, obstacle detection is limited to a 2D plane parallel and close to the ground, and therefore cannot detect protruding or hanging objects in the path of the AGV. To avoid collisions with such objects as well, the idea of PAN-Robots is to enhance the existing 2D safety concept with a 3D perception system based on an omnidirectional stereo camera. This paper describes the multi-level on-board sensor data fusion strategies implemented in the PAN-Robots project. The fused information on tracked and classified objects is not only used for on-board risk assessment and emergency collision avoidance, but is also communicated to the global control center for advanced fleet coordination and intelligent AGV navigation.
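
The fusion architecture itself is detailed in the paper; as a purely illustrative sketch (not the PAN-Robots implementation), the following Python fragment shows one way detections from a 2D safety laser scanner and a 3D stereo camera could be merged into a single object list by nearest-neighbour association in the ground plane. All class names, function names and the gating threshold are hypothetical assumptions for illustration only.

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    x: float          # position in the AGV frame [m]
    y: float
    z: float          # height above ground [m]; 0.0 for 2D scanner hits
    sensor: str       # "laser_2d" or "stereo_3d"

@dataclass
class FusedObject:
    x: float
    y: float
    z_max: float      # highest observed point; relevant for hanging/protruding objects
    sources: List[str]

def fuse(laser: List[Detection], stereo: List[Detection],
         gate: float = 0.5) -> List[FusedObject]:
    """Associate 3D stereo detections with 2D laser hits within a gating
    distance in the ground plane; unmatched detections become objects of
    their own, so obstacles above the scan plane are not lost."""
    fused: List[FusedObject] = [
        FusedObject(d.x, d.y, d.z, [d.sensor]) for d in laser
    ]
    for d in stereo:
        best: Optional[FusedObject] = None
        best_dist = gate
        for obj in fused:
            dist = math.hypot(obj.x - d.x, obj.y - d.y)
            if dist < best_dist:
                best, best_dist = obj, dist
        if best is not None:
            best.z_max = max(best.z_max, d.z)
            best.sources.append(d.sensor)
        else:
            fused.append(FusedObject(d.x, d.y, d.z, [d.sensor]))
    return fused

if __name__ == "__main__":
    laser_hits = [Detection(2.0, 0.1, 0.0, "laser_2d")]
    stereo_hits = [
        Detection(2.1, 0.0, 1.6, "stereo_3d"),   # same obstacle, seen higher up
        Detection(4.0, -1.0, 1.8, "stereo_3d"),  # hanging object, invisible to the 2D scan
    ]
    for obj in fuse(laser_hits, stereo_hits):
        print(obj)
```

The second stereo detection in the example has no counterpart in the 2D scan and still appears in the fused output, which mirrors the motivation stated in the abstract: the 3D perception layer adds obstacles that the ground-level scanners cannot see.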
