Abstract
In recent years, there has been increasing interest in robotic solutions to revolutionize the conventional construction industry. Despite various advances in developing mobile robotic solutions for construction automation, one key bottleneck toward a fully automated robotic solution in construction is the initialization of the mobile robot. Currently, most commercialized mobile construction robots are manually initialized before autonomous navigation can be performed at construction sites for automated tasks. Even after the robot is initialized, its location information can be lost during navigation, and re-initialization is required to resume operation. Incorrect initialization can cause robot pose tracking to fail and thus prevent the robot from performing the planned tasks. However, at indoor construction sites, GPS is not accessible, and indoor infrastructure, such as beacon devices, is not available for robot initialization. In addition, construction environments are dynamic, with significant changes in scenes and structures across construction blocks and floors, making pre-scanning of the environment and map matching difficult and time-consuming. An infrastructure-free and environment-independent robot initialization method is therefore required. In this paper, we propose an integrated Building Information Model (BIM)-based indoor robot initialization system that uses an object detector to automatically initialize the mobile robot when it is deployed at an unknown location. A convolutional neural network (CNN)-based object detection technique is used to detect and locate visual features, namely building components that are widely distributed at construction sites. A feature matching algorithm is developed to correlate the online information of the detected features with geometric and semantic information retrieved from the BIM. The robot location in the BIM coordinate frame is then estimated based on the feature association. Moreover, the proposed system aggregates BIM information and sensory information to supervise online robot decision making, making the entire system fully automatic. The proposed system is validated through experiments in various environments, including a university building and ongoing construction sites.
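To make the pipeline described above concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the core idea: building components detected in the robot frame are paired with landmarks retrieved from the BIM by their semantic class, and the robot pose in the BIM coordinate frame is recovered as the rigid transform that best aligns the matched point sets (a 2D least-squares alignment). All function names, classes, and input data below are hypothetical placeholders; the paper's actual feature matching uses richer geometric and semantic constraints than this simplification.

```python
"""Hypothetical sketch: semantic matching of detected building components
against BIM landmarks, followed by 2D rigid-transform pose estimation."""
import numpy as np


def match_by_class(detections, bim_landmarks):
    """Pair detections with BIM landmarks sharing the same semantic class.
    For illustration, only classes occurring exactly once on each side are
    matched; the paper's algorithm additionally enforces geometric consistency."""
    det_by_cls, bim_by_cls = {}, {}
    for cls, p in detections:
        det_by_cls.setdefault(cls, []).append(p)
    for cls, p in bim_landmarks:
        bim_by_cls.setdefault(cls, []).append(p)
    pairs = []
    for cls, dets in det_by_cls.items():
        bims = bim_by_cls.get(cls, [])
        if len(dets) == 1 and len(bims) == 1:
            pairs.append((dets[0], bims[0]))
    return pairs


def estimate_pose_2d(robot_pts, bim_pts):
    """Least-squares rigid transform (R, t) mapping robot-frame points onto
    their matched BIM-frame counterparts (Kabsch/Umeyama alignment)."""
    robot_pts = np.asarray(robot_pts, dtype=float)
    bim_pts = np.asarray(bim_pts, dtype=float)
    mu_r, mu_b = robot_pts.mean(axis=0), bim_pts.mean(axis=0)
    H = (robot_pts - mu_r).T @ (bim_pts - mu_b)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_b - R @ mu_r                           # robot position in BIM frame
    return R, t


# Hypothetical inputs: (class, position) of components detected in the robot
# frame and of the corresponding landmarks retrieved from the BIM.
detections = [("door", (2.0, 0.5)), ("column", (4.0, -1.0)), ("window", (1.0, 3.0))]
bim_landmarks = [("door", (12.0, 10.5)), ("column", (14.0, 9.0)), ("window", (11.0, 13.0))]

pairs = match_by_class(detections, bim_landmarks)
R, t = estimate_pose_2d([p for p, _ in pairs], [q for _, q in pairs])
print("Estimated robot position in BIM frame:", t)
```

In this toy example the BIM landmarks are a pure translation of the detected positions, so the recovered translation is approximately (10, 10) with an identity rotation; in practice the estimate would be refined as the robot continues to observe and re-associate features during navigation.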