Abstract

Collecting fully labeled data is a challenging problem for training classifiers. The general tendency in model development is to build ever-larger models in order to gain more capacity for predicting unseen instances. However, imbalanced datasets still fall short of what is needed to train a robust classifier. Training on augmented input data is a well-established way to help a model extract invariant features from images, but choosing an appropriate method for generating synthetic samples from the large number of feasible augmentations remains a major challenge. In this paper, we use three types of datasets to investigate the merits and demerits of five image transformation methods: color manipulation (color and contrast) and traditional affine transformations (shift, rotation, and flip). Across our experiments we consistently found that the color transformation methods perform worse than the traditional affine transformations at mitigating overfitting and improving classification accuracy.
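The abstract names five augmentation methods but does not give their implementations. The following is a minimal illustrative sketch of what such transformations can look like, assuming grayscale images represented as 2D lists of pixel values in [0, 255]; the function names, padding choice, and parameterization are our own assumptions, not the paper's.

```python
# Illustrative sketches of the five augmentation families (assumed forms,
# not the paper's exact implementations). Images are 2D lists of ints in [0, 255].

def shift(img, dx):
    """Horizontal shift by dx pixels with zero padding (padding choice is assumed)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 <= x - dx < w:
                out[y][x] = img[y][x - dx]
    return out

def rotate90(img):
    """Rotate 90 degrees clockwise (arbitrary-angle rotation would need interpolation)."""
    return [list(row) for row in zip(*img[::-1])]

def flip_horizontal(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def adjust_color(img, scale):
    """Scale intensities; a grayscale stand-in for color jitter."""
    return [[min(255, int(p * scale)) for p in row] for row in img]

def adjust_contrast(img, factor):
    """Scale each pixel's distance from the mean intensity."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [[max(0, min(255, int(mean + factor * (p - mean)))) for p in row]
            for row in img]
```

In a typical augmentation pipeline, each training image would be passed through one or more of these transforms (with randomly sampled parameters) before being fed to the classifier.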

