Abstract

Additive manufacturing offers significant advantages for producing complex vehicle parts. Because additive manufacturing is a precision production process, the components of the manufacturing instruments must be located at appropriate positions to ensure accuracy. Visual Simultaneous Localization and Mapping (SLAM) is a practical means to this end. Considering the dynamic characteristics of additive manufacturing scenarios, this paper constructs a deep learning-enhanced robust SLAM approach for production monitoring of additive manufacturing. The proposed method combines a semantic segmentation technique with a motion-consistency detection algorithm. First, a Transformer-based backbone network segments the images to establish a priori semantic information about dynamic objects. Next, the feature points belonging to dynamic objects are rejected by the motion-consistency detection algorithm. Then, the remaining static feature points are used for feature matching and pose estimation. In addition, we conducted a series of experiments to evaluate the proposed method. The results show that the proposal performs well in realistic additive manufacturing processes. Numerically, it improves the image segmentation effect by about 10% to 15% in visual SLAM-based additive manufacturing scenarios.
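The core filtering step described above — discarding feature points that fall inside semantically dynamic regions or violate motion consistency, then keeping the rest for pose estimation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the boolean dynamic-object mask, and the use of a per-point epipolar residual as the motion-consistency measure are all assumptions made for the example.

```python
import numpy as np

def filter_static_points(points, dynamic_mask, residuals, thresh=1.0):
    """Keep feature points that are both semantically static and
    motion-consistent.

    points       : (N, 2) array of pixel coordinates (x, y)
    dynamic_mask : (H, W) boolean array, True where the segmentation
                   network labeled a dynamic object (illustrative)
    residuals    : (N,) per-point motion-consistency residuals, e.g.
                   point-to-epipolar-line distances in pixels (assumed)
    thresh       : residual threshold above which a point is treated
                   as moving
    Returns the retained static points and the boolean keep mask.
    """
    xs = points[:, 0].astype(int)
    ys = points[:, 1].astype(int)
    # (a) reject points inside semantically dynamic regions
    semantically_static = ~dynamic_mask[ys, xs]
    # (b) reject points inconsistent with the estimated camera motion
    motion_consistent = residuals < thresh
    keep = semantically_static & motion_consistent
    return points[keep], keep

# Toy usage: one point on a dynamic object, one with a large residual,
# one genuinely static point that survives both checks.
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True  # pixel (x=0, y=0) belongs to a dynamic object
pts = np.array([[0.0, 0.0], [2.0, 2.0], [3.0, 1.0]])
res = np.array([0.1, 5.0, 0.2])
static_pts, keep = filter_static_points(pts, mask, res)
```

In practice such a filter sits between feature extraction and pose estimation: only `static_pts` would be passed on to feature matching and the pose solver.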
