Abstract

With the rapid development of robotics, meal delivery robots have become a research hotspot both in China and abroad. This paper presents a meal delivery robot based on the fusion of lidar and machine vision, integrating the two sensors' information in both time and space. Temporally, because the working frequencies of the lidar and the camera are coordinated, their data streams can be unified in time. Spatially, the relevant data are unified through equivalent transformations among the radar, world, image, and pixel coordinate systems. The robot runs on the ROS system and uses a server for data processing and transfer. The innovation of this work is the fusion of lidar and machine vision to give the robot accurate environmental perception. Experimental results show that multi-sensor data fusion effectively reduces cumulative coordinate errors and the probability of the robot "freezing" during operation, and substantially improves its meal delivery efficiency. In addition, the robot supports accurate obstacle avoidance, path planning, and autonomous navigation.
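The spatial unification described above follows the standard chain of transformations from the radar (lidar) frame, through the camera frame, to the pixel frame. As a minimal sketch of that chain, the snippet below projects a 3D lidar point into pixel coordinates with the pinhole camera model; the extrinsic matrices (R, t) and intrinsic matrix (K) here are illustrative placeholders, not the paper's actual calibration values.

```python
def project_lidar_point(p_lidar, R, t, K):
    """Map a 3D point from the lidar frame to (u, v) pixel coordinates.

    p_lidar: (x, y, z) in the lidar (radar) coordinate system
    R, t:    assumed extrinsic rotation (3x3) and translation (3,)
             taking lidar coordinates into the camera frame
    K:       assumed camera intrinsic matrix (3x3)
    """
    # Lidar frame -> camera frame: p_cam = R @ p_lidar + t
    p_cam = [sum(R[i][j] * p_lidar[j] for j in range(3)) + t[i]
             for i in range(3)]
    # Camera frame -> pixel frame (pinhole model): [u, v, w]^T = K @ p_cam
    uvw = [sum(K[i][j] * p_cam[j] for j in range(3)) for i in range(3)]
    # Divide by depth w to obtain homogeneous pixel coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]


# Placeholder calibration: identity extrinsics and a simple intrinsic
# matrix with fx = fy = 500, principal point (cx, cy) = (320, 240).
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]

# A point 2 m ahead of the sensor, slightly right and above center
u, v = project_lidar_point((0.5, 0.2, 2.0), R, t, K)
```

In a real ROS pipeline these transforms would typically be maintained with the `tf2` library rather than hand-coded, and the temporal unification the abstract mentions would pair lidar scans with camera frames of matching timestamps before applying this projection.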
