Abstract

Shrimp quality evaluation plays an essential role in producing high-value shrimp products, and the presence of soft-shell shrimp degrades product quality. The main obstacle to removing them is that soft-shell (s-shrimp) and sound (o-shrimp) shrimp look very similar in images, which severely limits traditional machine-vision methods. To address this problem, a novel method based on a deep convolutional neural network (Deep-ShrimpNet) is proposed. First, several image-processing steps were applied to normalize the shrimp images. Then, four critical hyper-parameters (i.e., batch size, dropout ratio, learning rate, and the number and size of local receptive fields) were optimized through a comparative analysis. In addition, the self-learned combined features in each convolutional layer were visualized to explore the internal mechanism of Deep-ShrimpNet, and an ablation study was performed by removing layers from the network to identify an efficient architecture. Finally, the superiority of the proposed algorithm was verified by comparison with other sophisticated CNNs. On a test dataset, Deep-ShrimpNet achieved a mean average precision (mAP) of 0.972 with a modeling time of 0.54 h. This robust performance on the shrimp dataset indicates that Deep-ShrimpNet is promising for on-line shrimp classification and quality measurement.
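The abstract does not specify Deep-ShrimpNet's exact architecture, but the sketch below illustrates the kind of binary s-shrimp/o-shrimp classifier described, with the four tuned hyper-parameters (batch size, dropout ratio, learning rate, and local receptive-field size) exposed as settings. The layer widths, kernel size, and all default values are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ShrimpCNN(nn.Module):
    """Minimal two-class CNN (soft-shell vs. sound shrimp).

    Channel widths and kernel size are illustrative placeholders;
    the abstract does not give Deep-ShrimpNet's actual layout.
    """

    def __init__(self, kernel_size: int = 5, dropout: float = 0.5):
        super().__init__()
        pad = kernel_size // 2
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size, padding=pad),  # local receptive field = kernel_size
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size, padding=pad),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),        # dropout ratio: one of the tuned hyper-parameters
            nn.Linear(64, 2),           # two classes: s-shrimp / o-shrimp
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# The remaining tuned hyper-parameters (learning rate, batch size) would enter
# through the optimizer and data loader; values here are arbitrary examples.
model = ShrimpCNN(kernel_size=5, dropout=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate
batch_size = 64                                            # batch size
```

A comparative hyper-parameter analysis, as described in the abstract, would then train and evaluate this model over a grid of such settings; the ablation study corresponds to dropping convolutional blocks from `self.features` and re-measuring test mAP.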
