Abstract

We are developing a multi-purpose pork deboning robot system that consists of multiple robot arms. To make optimal slits and achieve a higher yield, the system must detect the pubic bone and the tailbone exposed on the surface of the pork ham. However, recognizing these bone surfaces by image processing is difficult because the shape of the ham changes as the cutting process progresses and some pieces of meat cover the exposed bones. In this study, we developed a new technique to detect exposed bone regions and feature points by combining 3D image processing and deep learning (semantic pixel-wise segmentation). As a result, the fat, pubic bone, and tailbone areas can be inferred by a model pre-trained with SegNet on our own data, and feature points can be detected independently of surrounding foreign objects.
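The paper does not give implementation details, but the post-processing step it implies, turning a pixel-wise class map from the segmentation network into per-region feature points, can be sketched as follows. The class labels, the toy label map, and the use of a region centroid as the feature point are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical class labels; the paper's actual label set is not specified.
BACKGROUND, FAT, PUBIC_BONE, TAIL_BONE = 0, 1, 2, 3

def region_feature_points(label_map, class_ids):
    """For each class, return the centroid (x, y) of its pixels -- a simple
    stand-in for the paper's feature-point detection step."""
    points = {}
    for cid in class_ids:
        ys, xs = np.nonzero(label_map == cid)
        if len(xs) == 0:
            points[cid] = None  # class absent from this frame
        else:
            points[cid] = (float(xs.mean()), float(ys.mean()))
    return points

# Toy 8x8 label map standing in for the network's per-pixel argmax output.
label_map = np.zeros((8, 8), dtype=np.int32)
label_map[1:3, 1:3] = FAT
label_map[5:7, 5:7] = PUBIC_BONE

points = region_feature_points(label_map, [FAT, PUBIC_BONE, TAIL_BONE])
```

Because the feature point is computed only from pixels of the target class, nearby foreign objects assigned to other classes do not affect it, which mirrors the robustness claim in the abstract.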
