E-commerce platforms increasingly offer image search, but current systems have a key limitation: input images must be cropped statically and by hand because the system does not generate bounding boxes automatically. Addressing this requires an object detection algorithm that determines the number, location, and class of target objects and draws bounding boxes in real time before the user finalizes a selection. This lets users identify the items they want more easily and improves the accuracy and efficiency of visual search. Object detection algorithms such as R-CNN and Mask R-CNN prioritize accuracy over speed and are therefore less suited to real-time detection, so we adopted the YOLOv4 algorithm, which is well known for its effectiveness in real-time object detection. For image matching, we used the Color, Texture, and Edge-Based Image Retrieval (CTEBIR) technique. Our experiments show that YOLOv4 improves both the accuracy and speed of visual search by narrowing the search to the detected classes. The precision evaluation yielded an overall score of 95%, with per-class scores of 90% for cameras, 85% for keyboards, and 71% for laptops. These results support the reliability of the CTEBIR algorithm for image matching and give a clearer picture of the system's ability to detect and distinguish objects accurately.
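To make the detect-then-retrieve pipeline concrete, the sketch below shows one plausible way to wire it together with OpenCV's DNN module: YOLOv4 proposes bounding boxes in real time, each crop is summarized by simple color and edge features, and catalog images of the same detected class are ranked by similarity. The file names, feature choices, thresholds, and weights are illustrative assumptions and do not come from the paper; the actual CTEBIR descriptor may differ.

```python
# Hypothetical sketch of the detect-then-retrieve pipeline described in the abstract.
# File names (yolov4.cfg, yolov4.weights) and all parameter values are assumptions.
import cv2
import numpy as np

# --- Stage 1: real-time object detection with YOLOv4 (OpenCV DNN backend) ---
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def detect_objects(frame, conf_threshold=0.5, nms_threshold=0.4):
    """Return (class_id, score, box) triples; boxes are (x, y, w, h)."""
    class_ids, scores, boxes = model.detect(frame, conf_threshold, nms_threshold)
    return list(zip(class_ids, scores, boxes))

# --- Stage 2: simple color/edge descriptor for each cropped region ---
# This stands in for the CTEBIR matching step; the exact features and
# weighting used in the paper are not specified here.
def describe_crop(crop):
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    color_hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    cv2.normalize(color_hist, color_hist)
    edges = cv2.Canny(cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY), 100, 200)
    edge_density = float(np.count_nonzero(edges)) / edges.size
    return color_hist.flatten(), edge_density

def similarity(desc_a, desc_b):
    """Combine color-histogram correlation with edge-density agreement."""
    hist_a, edge_a = desc_a
    hist_b, edge_b = desc_b
    color_sim = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)
    edge_sim = 1.0 - abs(edge_a - edge_b)
    return 0.7 * color_sim + 0.3 * edge_sim  # weights are arbitrary

# --- Usage: detect products in a query frame, then rank catalog crops ---
frame = cv2.imread("query.jpg")
for class_id, score, (x, y, w, h) in detect_objects(frame):
    crop = frame[y:y + h, x:x + w]
    query_desc = describe_crop(crop)
    # ... compare query_desc against descriptors of catalog images of the
    #     same detected class and return the highest-scoring matches.
```

Restricting the comparison to catalog images of the detected class is what allows the detection stage to speed up retrieval, as the abstract notes.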