• We propose PixelMask, a novel data augmentation technique for adversarial robustness.
• Extensive experiments demonstrate the effectiveness of the proposed defense algorithm.
• The proposed PixelMask defense outperforms other strong data augmentation techniques.

The vulnerability of deep networks to adversarial perturbations has motivated researchers to design detection and mitigation algorithms. Inspired by dropout and dropconnect as well as data augmentation techniques, this paper presents "PixelMask"-based data augmentation as an efficient method for reducing the sensitivity of convolutional neural networks (CNNs) to adversarial attacks. In the proposed approach, samples generated using PixelMask are used as augmented data, which helps in learning robust CNN models. Experiments performed with multiple databases and architectures show that the proposed PixelMask-based data augmentation approach improves classification performance on adversarially perturbed images. The proposed defense mechanism can be applied effectively against different adversarial attacks and can easily be combined with any deep neural network (DNN) architecture to increase robustness. The effectiveness of the proposed defense is demonstrated in gray-box, white-box, and unseen train-test attack scenarios. For example, on the CIFAR-10 database under an adaptive attack (i.e., projected gradient descent), the proposed PixelMask improves the recognition performance of the CNN by at least 22.69%. Another advantage of the proposed algorithm over several existing defenses is that it retains or even increases classification accuracy on clean examples.
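The abstract does not spell out the masking procedure. As a minimal sketch, assuming PixelMask zeroes a randomly chosen fraction of pixel locations (analogous to dropout applied at the input rather than at hidden units), the augmentation could look like the following. The function name `pixel_mask`, the `mask_ratio` parameter, and the choice of zero as the mask value are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def pixel_mask(image, mask_ratio=0.2, mask_value=0.0, rng=None):
    """Return a copy of `image` (H, W, C) with a random fraction of
    pixel locations set to `mask_value` across all channels.

    Assumed behavior: a spatial Bernoulli mask, as in input-level dropout.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Sample a binary spatial mask: True keeps the pixel, False masks it.
    keep = rng.random((h, w)) >= mask_ratio
    masked = image.copy()
    masked[~keep] = mask_value  # overwrite the selected pixel locations
    return masked

# Hypothetical augmentation step: each training batch is extended with
# PixelMask-ed copies so the CNN sees both clean and masked inputs.
batch = np.random.rand(8, 32, 32, 3).astype(np.float32)  # stand-in CIFAR-10 batch
augmented = np.stack([pixel_mask(x, mask_ratio=0.2) for x in batch])
train_batch = np.concatenate([batch, augmented], axis=0)
```

Under this reading, training on the concatenated clean and masked samples is what reduces the model's reliance on any small set of pixels, which is the intuition the abstract attributes to the defense.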