Abstract
Cooperative machine learning has many applications, such as data annotation, where an initial model trained on partially labeled data is used to continuously predict labels for unseen data. Predictions with low confidence are revised manually so that the model can be retrained on both the predicted and the revised data. In this paper, we propose an alternative to this approach: a training process we call Deep Unsupervised Active Learning. With the proposed training scheme, a classification model can incrementally acquire new knowledge during the testing phase without manual guidance or correction of its decisions. The training process consists of two stages: a supervised training stage for the classification model, followed by an unsupervised active learning stage during the test phase. Labels predicted with high confidence during the test phase are continuously used to extend the model's knowledge base. For the proposed method to perform well, the model must start with a high initial recognition rate. To this end, we exploited the pre-trained Visual Geometry Group (VGG16) model, applied to three datasets: Mathematical Image Analysis (AMI), University of Science and Technology Beijing (USTB2), and Annotated Web Ears (AWE). The recognition rate on the USTB2 dataset was further improved significantly by colorizing its images with a Generative Adversarial Network (GAN). The obtained results compare favorably with current methods: recognition rates of 100.00%, 98.33%, and 51.25% on the USTB2, AMI, and AWE datasets, respectively.
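The two-stage scheme described above can be illustrated with a minimal sketch of confidence-thresholded pseudo-labeling on a VGG16 backbone. The 0.95 confidence threshold, the number of classes, the optimizer settings, and the synthetic input batch are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 10          # assumption: number of identities in the dataset
CONF_THRESHOLD = 0.95     # assumption: cut-off for accepting a predicted label

# Stage 1: start from a VGG16 backbone pre-trained on ImageNet and
# replace its final classifier layer for the target classes.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def supervised_step(images, labels):
    """One supervised update on labeled (or pseudo-labeled) data."""
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()

def unsupervised_active_learning(unlabeled_images):
    """Stage 2: keep only high-confidence predictions and reuse them
    as pseudo-labels to extend the model's knowledge base."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_images), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
    keep = conf >= CONF_THRESHOLD
    if keep.any():
        supervised_step(unlabeled_images[keep], pseudo_labels[keep])
    return int(keep.sum())

# Usage with synthetic images standing in for a test batch.
batch = torch.randn(8, 3, 224, 224)
accepted = unsupervised_active_learning(batch)
print(f"{accepted} high-confidence samples added to the knowledge base")
```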