Abstract

In leather images, precise pore segmentation is a prerequisite for accurate species prediction. However, traditional methods produce false segmentations because of variability in foreground color, texture, size, and boundaries. This paper proposes a novel deep-learning-based auto-pore segmentation network (ApSnet) that automatically segments species-definite pores from a new digital microscopic leather image dataset. ApSnet learns the pore regions from raw images and their corresponding ground-truth masks. The images undergo patch-based augmentation to accelerate training. The depth and the concatenations of the existing Unet are reduced to design the simple, shallow ApSnet. A weighted average loss function strengthens the network's image understanding, enabling it to segment pore pixels accurately. The experimental analysis confirms the superior performance of the proposed ApSnet over Unet, Unet++, and I-Unet. The study also validates ApSnet on the KDSB18 and KVasir-SEG medical image segmentation datasets, and tests its robustness on leather images with fuzzy textures. Finally, a k-nearest neighbor (KNN) model learns and classifies features extracted from the ApSnet-segmented images with 97.4% accuracy. The analysis thus shows that precise pore segmentation is indispensable for interpreting species-distinct pore features. The present work therefore provides a human–computer interactive platform for accurate leather species prediction by designing a computationally efficient ApSnet-based segmentation model.
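The abstract does not specify which terms the weighted average loss combines. A common choice for binary segmentation, sketched below purely as an assumption, is a weighted mix of pixel-wise binary cross-entropy (which drives per-pixel accuracy) and soft Dice loss (which drives region-level overlap); the mixing weight `w` here is hypothetical, not taken from the paper.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between predicted probabilities and a binary mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 minus the Dice overlap coefficient of prediction and mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def weighted_avg_loss(pred, target, w=0.5):
    """Weighted average of BCE and Dice terms; w is a hypothetical mixing weight."""
    return w * bce_loss(pred, target) + (1 - w) * dice_loss(pred, target)

# Toy example: predictions close to the mask yield a lower loss than inverted ones.
target = np.array([1.0, 0.0, 1.0, 0.0])
good_pred = np.array([0.9, 0.1, 0.8, 0.2])
print(weighted_avg_loss(good_pred, target))
print(weighted_avg_loss(1 - good_pred, target))
```

Combining a pixel-wise term with an overlap term is a standard remedy when the foreground (here, pores) occupies only a small fraction of each image, since Dice is insensitive to the large background class.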
