Abstract

Purpose
3D printing of objects whose size exceeds the build scope of the printer remains a tough challenge in practice. The purpose of this paper is to propose a visual stitching large-scale (VSLS) 3D-printing method to solve this problem.

Design/methodology/approach
A single-segmentation-point method and a multiple-segmentation-point method are proposed to adaptively divide each slice of the model into several segments. For each layer, the mobile robot moves to different positions to print each segment, and every time it arrives at a planned location, the contours of the already-printed segments are captured with a high-definition camera and matched by a feature-point recognition algorithm. A coordinate transformation is then applied to adjust the printing codes of the next segment so that each part can be perfectly aligned. The authors print up layer by layer in this manner until the model is complete.

Findings
In Section 3, two specimens, whose sizes are 166 per cent and 252 per cent of the scope of the 3D-printing robot, are successfully printed. For comparison, complete models of the specimens are printed on a suitably sized conventional printer. The result shows that the specimens in the test group have essentially identical sizes to those in the control group, which verifies the feasibility of the VSLS method.

Originality/value
Unlike most current solutions, which place harsh requirements on the positioning accuracy of the mobile robot, the authors use a camera to compensate for the positioning accuracy lost during movement, thereby avoiding precise control of the device's location. The coordinate transformation adjusts the printing codes of the next sub-model so that each part can be aligned perfectly.
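The paper does not give the details of the coordinate transformation in this abstract, but the described step — estimating the robot's actual pose from matched feature points on the printed contour and re-mapping the next segment's toolpath accordingly — can be sketched as a standard 2D rigid registration (Kabsch-style least-squares fit). The function names and the use of NumPy below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src points to dst points.

    src: feature points on the printed contour, in the robot's nominal frame.
    dst: the same features as observed by the camera after the robot moves.
    (Illustrative sketch; the paper's actual matching pipeline is not given.)
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def adjust_toolpath(points, R, t):
    """Apply the recovered transform to the next segment's XY printing coordinates."""
    return np.asarray(points, dtype=float) @ R.T + t
```

In use, the transform recovered from the camera-matched features would be applied to every XY coordinate in the next segment's printing code before it is executed, so that the new segment lands flush against the segments already on the build surface.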
