Abstract
Environmental perception is a necessary prerequisite for intelligent robots to perform specified tasks and forms the basis for subsequent control and decision-making. In recent years, with the rapid development of deep learning and dramatic improvements in hardware performance, vision-based environmental perception technologies, such as target recognition and target detection, have made significant progress. However, most vision algorithms are developed on images captured under stable lighting conditions and without significant disturbances, whereas robots often need to operate in unstructured, complex, or visually degraded environments. Visual perception alone cannot meet such job requirements and lacks the ability to adapt to the environment. Therefore, environment perception based on multi-sensor fusion has become a popular research direction. In this paper, we first analyze the characteristics of the sensors required for perception and briefly review the application status of uni-modal sensors in complex environments such as mines, railways, highways, and tunnels. Secondly, we introduce the datasets and sensor fusion methods used for robotic perception. Thirdly, we provide an overview of multi-modal perception technology applied to intelligent robots. Finally, we summarize the challenges and future development trends in this direction.