Abstract

Deep convolutional neural networks perform well on computer vision tasks when plenty of training data is available. Small training datasets, however, remain a problem. Addressing it requires a training pipeline that handles rare object types and an overall lack of training data while still producing well-performing models with stable predictions. This article reports on <i>XtremeAugment</i>, a comprehensive framework that provides an easy, reliable, and scalable way to collect image datasets and to efficiently label and augment the collected data. The framework consists of two augmentation techniques that can be used independently and complement each other when applied together: Hardware Dataset Augmentation (HDA) and Object-Based Augmentation (OBA). HDA allows users to collect more data and spend less time on manual data labeling. OBA significantly increases the variability of the training data while keeping the distribution of the augmented images close to that of the original dataset. We assess the proposed approach on an apple spoil segmentation scenario. Our results demonstrate a substantial increase in model accuracy, reaching a 0.91 F1-score and outperforming the baseline model by up to 0.62 F1-score in a few-shot learning case on in-the-wild data. <i>XtremeAugment</i> is most beneficial when images are collected in a controlled indoor environment but the model must be applied in the wild.

Highlights

  • In recent years, deep convolutional neural networks have reached state-of-the-art performance on computer vision (CV) tasks such as classification [1], semantic segmentation [2], object detection [3], domain adaptation [4], and pose estimation [5]. In comparison with classic CV algorithms, deep learning (DL) models are trained rather than programmed [6]

  • We show the separate contributions of Hardware Dataset Augmentation (HDA) and Object-Based Augmentation (OBA) to the segmentation task

  • The proposed approach, called XtremeAugment, consists of Hardware Dataset Augmentation (HDA), which is applied during the data collection stage, and Object-Based Augmentation (OBA), which is applied after image labeling
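The article does not include implementation details in this excerpt, but the OBA idea described above, pasting labeled objects onto new backgrounds to increase training-data variability, can be illustrated with a minimal copy-paste augmentation sketch. The function name and its interface below are assumptions for illustration, not the framework's actual API.

```python
import numpy as np

def paste_object(background, obj, mask, rng=None):
    """Minimal sketch of object-based (copy-paste) augmentation:
    paste a masked object crop onto a background image at a random
    location. `obj` and `mask` share the same HxW; `mask` is boolean.
    This is a hypothetical helper, not XtremeAugment's real interface."""
    rng = np.random.default_rng() if rng is None else rng
    bh, bw = background.shape[:2]
    oh, ow = mask.shape
    # Pick a random top-left corner so the object fits inside the background.
    y = rng.integers(0, bh - oh + 1)
    x = rng.integers(0, bw - ow + 1)
    out = background.copy()
    region = out[y:y + oh, x:x + ow]
    region[mask] = obj[mask]  # copy only the object pixels, not the crop box
    # Also return the pasted-object mask in full-image coordinates,
    # which doubles as the segmentation label for the new image.
    full_mask = np.zeros((bh, bw), dtype=bool)
    full_mask[y:y + oh, x:x + ow] = mask
    return out, full_mask
```

Because the object mask is carried along with the pixels, each augmented image comes with its segmentation label for free, which is what lets this kind of augmentation reduce manual labeling effort.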



Introduction

In comparison with classic CV algorithms, deep learning (DL) models are trained rather than programmed [6]. This makes them more flexible across domains and requires less domain-specific knowledge from the developer, because the model retrieves patterns directly from the data. The drawback is that DL models rely heavily on data and require comprehensive training datasets [7]. Existing datasets cannot cover every specific problem, so a data scientist must prepare data for each new problem [16]. On average, data scientists spend up to 80% of their time on data preparation [17]. This gap is narrowing over time, and according to Web of Science, the number of studies focused on data is growing exponentially.


