Abstract

Convolutional neural networks (CNNs) provide the sensing and detection community with a discriminative approach to image classification. One of the greatest limitations of deep CNN image classifiers, however, is the need for extensive training datasets containing a variety of image representations. While current methods, such as generative adversarial network data augmentation, noise injection, rotation, and translation, can help CNNs associate new images and their feature representations with those of a learned image class, many fail to provide new contexts of ground-truth feature information. To expand the association of critical class features within CNN image training datasets, an image pairing and training dataset augmentation paradigm based on a multi-sensor-domain image data fusion algorithm is proposed. The algorithm uses a mutual information (MI) and merit-based feature selection subroutine to pair highly correlated cross-domain images drawn from multiple sensor-domain image datasets. It then merges each cross-domain image pair into the opposite sensor domain's feature set via a highest-MI, cross-sensor-domain image concatenation function. The augmented image set is used to retrain the CNN to recognize broader generalizations of image class features through mixed, cross-domain representations. Experimental results indicated an increased ability of CNNs to generalize and discriminate between image classes when tested on synthetic aperture radar vehicle, solar cell device reliability screening, and lung cancer detection image datasets.
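As a rough illustration of the pairing and concatenation steps described above, the Python sketch below estimates MI between cross-domain image pairs from their joint intensity histograms and stacks each image with its highest-MI counterpart from the other domain. This is a minimal sketch, not the authors' implementation: the merit-based feature selection subroutine is omitted, images are assumed to be grayscale arrays of equal shape, and the function names (pairwise_mi, fuse_cross_domain) and the channel-stacking choice are illustrative assumptions.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def pairwise_mi(img_a, img_b, bins=32):
        # Estimate MI between two images from their joint intensity
        # histogram (a standard histogram-based MI estimator).
        joint_hist, _, _ = np.histogram2d(
            img_a.ravel(), img_b.ravel(), bins=bins
        )
        # mutual_info_score accepts a contingency table directly;
        # the label arguments are ignored when one is supplied.
        return mutual_info_score(None, None, contingency=joint_hist)

    def fuse_cross_domain(domain_a_images, domain_b_images):
        # For each domain-A image, find the highest-MI domain-B image
        # and concatenate the pair into one multi-channel sample.
        fused = []
        for img_a in domain_a_images:
            scores = [pairwise_mi(img_a, img_b) for img_b in domain_b_images]
            best = domain_b_images[int(np.argmax(scores))]
            fused.append(np.stack([img_a, best], axis=-1))
        return np.asarray(fused)

Each fused sample thus carries ground-truth feature context from both sensor domains, which is what the subsequent CNN retraining step exploits.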
