Abstract

Over the past few decades, research on object detection has advanced rapidly, and one area where this progress is clearly visible is the fashion industry. Fast and accurate detection of E-commerce fashion products is crucial for assigning them to the appropriate category. Nowadays, E-commerce sites offer both new and second-hand clothing for purchase, so fashion items must be categorized precisely regardless of cluttered backgrounds. We present a newly collected dataset of small product images with various resolutions, sizes, and positions from the Shopee E-commerce (Thailand) website. This paper also proposes the Fashion Category You Only Look Once version 4 model, called FC-YOLOv4, for detecting multiclass fashion products. We used a semi-supervised learning approach to reduce image-labeling time, and the number of resulting images was then increased through image augmentation. This approach yields reasonable Average Precision (AP), mean Average Precision (mAP), True/False Positive (TP/FP) counts, Recall, and Intersection over Union (IoU), as well as reliable object detection. According to the experimental findings, our model increases mAP by 0.07 percent and 40.2 percent compared to the original YOLOv4 and YOLOv3, respectively. These findings demonstrate that the FC-YOLOv4 model provides accurate fashion category detection for both properly captured and cluttered images compared to the YOLOv4 and YOLOv3 models.
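For readers unfamiliar with the Intersection over Union metric cited above, the following minimal Python sketch illustrates how IoU between a predicted and a ground-truth bounding box is typically computed and thresholded to decide True/False Positives; the (x1, y1, x2, y2) box convention, the function name, and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union = sum of both areas minus the overlap.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Example: a detection is usually counted as a True Positive when its IoU
# with a ground-truth box meets a threshold such as 0.5 (assumed here).
score = iou((10, 10, 60, 60), (20, 20, 70, 70))
print(round(score, 3), score >= 0.5)

Per-class AP is then obtained from the precision-recall curve built from these TP/FP decisions, and mAP is the mean of the APs over all fashion categories.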
