Abstract

Digital transformation is an information technology (IT) process that integrates digital information with operating processes. Its introduction to the workplace can promote the development of progressively more efficient manufacturing processes, intensifying competition in terms of speed and production capacity. Equipment combined with computer vision has begun to replace manpower in several industries, including manufacturing. However, current object detection methods cannot identify the actual rotation angle of a specific grasping target when objects are piled. This study therefore proposes a deep-learning framework that integrates two object detection models: Faster R-CNN (region-based convolutional neural network) is used to locate the direction reference point of the target, and Mask R-CNN is adopted to obtain the segmentation, which not only forms the basis of an area filter but also generates a rotated bounding box via the minAreaRect function. By integrating the outputs of the two models, the location and actual rotation angle of the target can be obtained. The aim of this research is to provide a robot arm with the position and angle of the topmost object for grasping. An empirical dataset of piled footwear insoles was used to test the proposed method during the assembly process. Results show that detection accuracy reached 96.26%. Implementing the proposed method in the manufacturing process not only saves the manpower required to sort products but also shortens process time, enlarging production capacity. The proposed method can serve as part of a smart manufacturing system to enhance an enterprise's competitiveness in the future.
