Abstract

To conveniently and precisely evaluate the appearance quality of orthodox black tea, we present a novel method based on computer vision that integrates image processing and deep learning. Tea images were collected with a custom-built image acquisition device and processed with Adaptive Local Tone Mapping (ALTM) and Unsharp Masking to optimize illumination and sharpness. The processed images were then used to train six convolutional neural networks (CNNs) via transfer learning in order to identify a suitable network structure. CNNs built from MBConv modules were found to perform better on this task. Consequently, a CNN classification model based on MBConv modules (Improved Inception Network) was constructed and trained; it yielded a test accuracy of 95%, outperforming the other CNNs, whereas the original Inception V3 reached 89%. On an independent validation set, the proposed method achieved 97.22% accuracy, demonstrating the viability of applying image processing and deep learning approaches to practical problems in the field of tea assessment.
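As an illustration of the preprocessing stage described above, the following is a minimal sketch of an OpenCV/NumPy pipeline. It is an assumption, not the authors' implementation: the log-based luminance compression only approximates the global stage of ALTM, and the sigma/amount parameters of the unsharp mask are illustrative values.

```python
# Minimal preprocessing sketch (assumed OpenCV/NumPy pipeline; the log-based
# luminance compression approximates only the global stage of ALTM, and the
# parameter values are illustrative, not those used in the paper).
import cv2
import numpy as np

def tone_map_global(bgr):
    """Compress luminance with a log curve scaled by the log-average luminance,
    a rough stand-in for the global stage of adaptive local tone mapping."""
    img = bgr.astype(np.float32) / 255.0
    lum = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) + 1e-6
    lum_avg = np.exp(np.mean(np.log(lum)))        # log-average (geometric mean) luminance
    lum_max = lum.max()
    lum_out = np.log(lum / lum_avg + 1.0) / np.log(lum_max / lum_avg + 1.0)
    gain = (lum_out / lum)[..., None]             # per-pixel luminance gain
    return np.clip(img * gain, 0.0, 1.0)

def unsharp_mask(img, sigma=1.5, amount=1.0):
    """Sharpen by adding back the difference between the image and its Gaussian blur."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return np.clip(cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0.0), 0.0, 1.0)

def preprocess(path):
    """Load a tea image, adjust illumination, then sharpen."""
    bgr = cv2.imread(path)
    toned = tone_map_global(bgr)
    sharpened = unsharp_mask(toned)
    return (sharpened * 255).astype(np.uint8)
```

The preprocessed images would then be fed to the CNNs for transfer learning; the network-training step is not shown here.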
