Abstract

This study aims to develop an image captioning and annotation system using the Fashion MNIST dataset, which consists of 70,000 grayscale images spanning ten clothing categories. The system uses a convolutional neural network (CNN) to extract image features and a long short-term memory (LSTM) network to generate captions. Performance is evaluated with Precision, Recall, F1 score, BLEU, METEOR, CIDEr, and ROUGE-L, and per-category accuracy is computed to assess the model across the different clothing classes. A visual analysis of the generated captions provides insight into the model's effectiveness and potential areas for improvement. The results indicate that the model classifies clothing items successfully, as evidenced by its high accuracy on the test set. The qualitative study shows that the model can identify different types of clothing by producing relevant captions. In the architecture, the feature representation (normalization) layer plays a crucial role in transforming the detected features into a flattened row, which is then passed to a fully connected layer that learns the relationships and makes the final decision; the output layer uses a softmax activation function to assign a probability to each image class, and the class with the highest probability is selected as the predicted class.
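The classification path described above (convolutional feature extraction, normalization, flattening, a fully connected layer, and a softmax output) can be illustrated with a minimal Keras sketch. This is an illustrative assumption, not the authors' exact architecture: layer sizes, the optimizer, and the omission of the LSTM captioning head are choices made here for brevity.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): a small
# Keras CNN for Fashion MNIST ending in flatten -> dense -> softmax.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes: int = 10) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),          # 28x28 grayscale Fashion MNIST images
        layers.Conv2D(32, 3, activation="relu"),  # convolutional feature extraction
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),              # feature representation (normalization) layer
        layers.Flatten(),                         # flatten detected features into a single row
        layers.Dense(128, activation="relu"),     # fully connected layer learns relationships
        layers.Dense(num_classes, activation="softmax"),  # probability per clothing class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: train briefly on Fashion MNIST, then take the highest-probability class.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0
model = build_classifier()
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
pred_class = model.predict(x_test[:1]).argmax(axis=-1)  # predicted image class
```

In a full captioning pipeline, the CNN features would additionally be fed to an LSTM decoder that generates the caption tokens; only the classification branch is sketched here.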
