Abstract

In the ever-expanding online fashion market, businesses in the clothing sales sector are presented with substantial growth opportunities. To capitalize on this potential, it is essential to implement effective methods for accurately identifying clothing items, which requires a deep understanding of customer preferences, niche markets, tailored sales strategies, and an improved user experience. Artificial intelligence (AI) systems that can recognize and categorize clothing items play a crucial role in achieving these objectives, empowering businesses to boost sales and gain valuable customer insights. However, the challenge lies in accurately classifying diverse clothing items in a rapidly evolving fashion landscape. Variations in styles, colors, and patterns make consistent categorization difficult; furthermore, the quality of user-supplied images varies widely, and background clutter further complicates accurate classification. Existing systems may struggle to provide the level of accuracy needed to meet customer expectations. To address these challenges, a careful dataset preparation process is essential, including systematic data organization, background removal with the GrabCut algorithm, and image resizing for uniformity. The proposed solution is a hybrid approach that combines the strengths of the ResNet152 and EfficientNetB7 architectures to create a classification system capable of reliably distinguishing between various clothing items. The key innovation of this study is a Two-Objective Learning model that leverages both architectures; this fusion enhances the accuracy of clothing item classification. The carefully prepared dataset serves as the foundation for the model, ensuring that it can handle diverse clothing items effectively. The proposed methodology offers a novel approach to image identification and feature extraction, achieving a classification accuracy of 94% along with stability and robustness.
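
As a rough illustration of the preprocessing described above, the sketch below applies OpenCV's GrabCut algorithm to suppress the background and then resizes the result to a uniform shape. The function name, the `rect` bounding box, the iteration count, and the 224x224 target size are assumptions made for illustration; they are not specified in the abstract.

```python
import cv2
import numpy as np

def remove_background(image_path, rect, size=(224, 224)):
    """Suppress the background with GrabCut and resize for uniformity (illustrative sketch)."""
    img = cv2.imread(image_path)

    # GrabCut working buffers: a per-pixel label mask and two GMM model arrays.
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)

    # Initialise GrabCut from a bounding box around the garment (rect = (x, y, w, h),
    # assumed to come from a detector or a fixed margin) and run five iterations.
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Keep only pixels labelled definite or probable foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    result = img * fg[:, :, np.newaxis]

    # Resize so every image in the dataset shares the same dimensions.
    return cv2.resize(result, size)
```

The hybrid classifier could likewise be sketched as two ImageNet-pretrained backbones whose pooled features are concatenated before a shared classification head. This is only one plausible reading of the ResNet152/EfficientNetB7 fusion; the paper's actual Two-Objective Learning formulation, class count, and training details are not given in the abstract, so the head layers and hyperparameters below are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet152, EfficientNetB7

NUM_CLASSES = 10                 # placeholder; the real label set is not stated in the abstract
INPUT_SHAPE = (224, 224, 3)      # matches the resized images from the preprocessing sketch

def build_hybrid_classifier():
    inputs = layers.Input(shape=INPUT_SHAPE)

    # Two pretrained backbones, each reduced to a feature vector by global average pooling.
    # (Each backbone's own preprocess_input step is omitted here for brevity.)
    resnet = ResNet152(include_top=False, weights="imagenet", pooling="avg")
    effnet = EfficientNetB7(include_top=False, weights="imagenet", pooling="avg")

    # Concatenate the two feature vectors and classify with a small dense head.
    fused = layers.Concatenate()([resnet(inputs), effnet(inputs)])
    fused = layers.Dense(512, activation="relu")(fused)
    fused = layers.Dropout(0.3)(fused)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

    return models.Model(inputs, outputs)

model = build_hybrid_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```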
