Abstract
The food industry continuously prioritizes methods and technologies that ensure product quality and safety. Traditional approaches, which rely on conventional algorithms with predefined features, have shown limitations in representing the intricate characteristics of food items. Recently, a significant shift has occurred with the introduction of convolutional neural networks (CNNs), which have become powerful and versatile feature-extraction tools and a preferred choice in deep learning. The main objective of this study is to evaluate the effectiveness of CNNs for classifying chicken meat products by comparing different image preprocessing approaches. The study was carried out in three phases. In the first phase, the original images were processed with a CNN alone, without traditional filters or color modifications. In the second phase, color filters were applied to separate the images by their chromatic characteristics before CNN processing. In the third phase, additional filters, namely Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), and saliency, were incorporated to extract complementary features from the images, again followed by CNN processing. Experimental images, sourced from the Pygsa Group databases, were preprocessed with these filters before being fed into a CNN-based classification architecture. The results show that the developed models outperformed conventional methods, significantly improving the ability to differentiate between chicken meat types such as yellow wing, white wing, yellow thigh, and white thigh, with training accuracy reaching 100%. This highlights the potential of CNNs, especially when combined with advanced architectures, for efficient detection and analysis of complex food matrices. In conclusion, these techniques can be applied to food quality control and other detection and analysis domains.
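To make the third-phase pipeline concrete, the minimal Python sketch below stacks HOG, LBP, and saliency maps as input channels for a small CNN classifier. The image size, filter parameters, network architecture, and library choices (scikit-image, OpenCV contrib, TensorFlow/Keras) are illustrative assumptions; the abstract does not specify the authors' actual implementation.

```python
# Illustrative sketch only: the paper's exact architecture, filter parameters,
# and image sizes are not stated in the abstract, so everything below is assumed.
import numpy as np
import cv2                                    # requires opencv-contrib-python for saliency
from skimage.feature import hog, local_binary_pattern
import tensorflow as tf

IMG_SIZE = (128, 128)    # assumed input resolution
NUM_CLASSES = 4          # yellow wing, white wing, yellow thigh, white thigh

def preprocess(image_bgr):
    """Build a 3-channel tensor from HOG, LBP, and saliency maps of one image."""
    gray = cv2.cvtColor(cv2.resize(image_bgr, IMG_SIZE), cv2.COLOR_BGR2GRAY)

    # HOG visualization map (same spatial size as the input)
    _, hog_img = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), visualize=True)

    # Uniform LBP texture map
    lbp_img = local_binary_pattern(gray, P=8, R=1, method="uniform")

    # Spectral-residual saliency map
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal_img = saliency.computeSaliency(gray)

    # Normalize each map to [0, 1] and stack as channels
    stack = np.stack([hog_img, lbp_img, sal_img], axis=-1).astype("float32")
    stack -= stack.min(axis=(0, 1), keepdims=True)
    stack /= stack.max(axis=(0, 1), keepdims=True) + 1e-8
    return stack

def build_cnn():
    """A small, generic CNN classifier; the study's actual architecture may differ."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SIZE, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    model = build_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(...) would then be called on the preprocessed tensors and labels.
```

The first and second phases described above would follow the same structure, feeding either the raw resized images or color-filtered versions into the CNN instead of the stacked feature maps.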