Abstract

Automatic data augmentation searches for image-transformation policies automatically and can improve performance across a range of vision tasks. RandAugment (RA), one of the most widely used automatic data augmentation methods, has achieved great success across models and datasets of different scales. However, RA selects transformations uniformly at random and applies a single magnitude to all of them, which is suboptimal across different models and datasets. In this paper, we develop Differentiable RandAugment (DRA) to learn the selection weights and magnitudes of transformations for RA. The magnitude of each transformation is modeled as a normal distribution with learnable mean and standard deviation. We also introduce the gradient of transformations to reduce the bias in gradient estimation, and a KL-divergence term in the loss to reduce the optimization gap. Experiments on CIFAR-10/100 and ImageNet demonstrate the efficiency and effectiveness of DRA. Searching for only 0.95 GPU hours on ImageNet, DRA reaches a Top-1 accuracy of 78.19% with ResNet-50, outperforming RA by 0.28% under the same settings. Transfer learning on object detection further demonstrates the power of DRA. DRA is one of the few methods that surpasses RA on ImageNet, and it has great potential to be integrated into modern training pipelines to achieve state-of-the-art performance. Our code will be made publicly available for out-of-the-box use.
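To make the abstract's central idea concrete, the sketch below (in PyTorch) shows one plausible way to parameterize learnable selection weights over transformations and a per-transformation magnitude drawn from a normal distribution with learnable mean and standard deviation, made differentiable via the reparameterization trick. All names here are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LearnableAugmentPolicy(nn.Module):
    """Minimal sketch of DRA-style learnable augmentation parameters
    (hypothetical; the paper's released code may differ)."""

    def __init__(self, num_transforms: int):
        super().__init__()
        # Logits defining the probability of selecting each transformation.
        self.selection_logits = nn.Parameter(torch.zeros(num_transforms))
        # Per-transformation magnitude distribution: learnable mean, and a
        # log-std parameterized through softplus to keep the std positive.
        # Magnitudes are normalized to [0, 1] here as an illustrative choice.
        self.mag_mean = nn.Parameter(torch.full((num_transforms,), 0.5))
        self.mag_log_std = nn.Parameter(torch.full((num_transforms,), -2.0))

    def selection_probs(self) -> torch.Tensor:
        # Learned selection weights replace RA's uniform random choice.
        return torch.softmax(self.selection_logits, dim=0)

    def sample_magnitude(self, idx: int) -> torch.Tensor:
        # Reparameterization: magnitude = mean + std * eps with eps ~ N(0, 1),
        # so the sample stays differentiable w.r.t. both mean and std.
        std = nn.functional.softplus(self.mag_log_std[idx])
        eps = torch.randn(())
        return (self.mag_mean[idx] + std * eps).clamp(0.0, 1.0)

policy = LearnableAugmentPolicy(num_transforms=14)  # RA's typical pool size
probs = policy.selection_probs()
idx = int(torch.multinomial(probs, num_samples=1))
magnitude = policy.sample_magnitude(idx)
```

Note that the hard `multinomial` draw is itself non-differentiable; in practice a search method like this would need a continuous relaxation of the discrete choice (e.g., Gumbel-softmax) for gradients to reach the selection weights.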
