Modern earth observation systems provide large volumes of heterogeneous remote sensing data. How to manage this abundance while exploiting its complementarity is a key challenge in current remote sensing analysis. In the case of optical Very High Spatial Resolution (VHSR) imagery, satellites acquire both Multispectral (MS) and Panchromatic (PAN) images at different spatial resolutions. Data fusion techniques address this by combining the complementary information captured by the different sensors. Classification of remote sensing images with deep learning techniques based on Convolutional Neural Networks (CNNs) is gaining a strong foothold owing to promising results. The most significant property of CNN-based methods is that no prior feature extraction is required, which leads to good generalization capabilities. In this article, we propose a novel deep-learning-based SMDTR-CNN (Same Model with Different Training Rounds Convolutional Neural Network) approach for classifying the fused (LISS IV + PAN) image after image fusion. The fusion of remote sensing images from the CARTOSAT-1 (PAN) and IRS-P6 (LISS IV) sensors is obtained by Quantization Index Modulation with the Discrete Contourlet Transform (QIM-DCT). To enhance the fusion performance, we remove specific types of noise using a Bayesian filter with an Adaptive Type-2 Fuzzy System. The results of the proposed techniques are evaluated in terms of precision, classification accuracy and kappa coefficient. The results show that SMDTR-CNN with deep learning achieved the best overall accuracy and kappa coefficient. In addition, the accuracy of each class of the fused images in the LISS IV + PAN dataset improved by 2% and 5%, respectively.
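The abstract does not give implementation details of SMDTR-CNN, so the following PyTorch sketch is only one plausible illustration of the stated idea: the same CNN architecture is trained in several independent rounds (differing only in random seed) on patches of the fused image, and the per-round predictions are combined by majority vote. The network layout, the number of rounds, the 4-band patch input and the voting rule are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a "same model, different training rounds" ensemble.
# All hyperparameters and layer sizes are hypothetical.
import torch
import torch.nn as nn


class PatchCNN(nn.Module):
    """Small CNN that classifies fixed-size patches of the fused image."""

    def __init__(self, in_channels: int = 4, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def train_round(seed: int, loader, epochs: int = 5) -> PatchCNN:
    """Train one independent round of the same architecture; only the seed changes."""
    torch.manual_seed(seed)
    model = PatchCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, labels in loader:
            opt.zero_grad()
            loss_fn(model(patches), labels).backward()
            opt.step()
    return model


def smdtr_predict(models, patches: torch.Tensor) -> torch.Tensor:
    """Combine the rounds by majority vote over their predicted class labels."""
    with torch.no_grad():
        votes = torch.stack([m(patches).argmax(dim=1) for m in models])
    return votes.mode(dim=0).values


# Usage (assuming `train_loader` yields (patch, label) batches):
# models = [train_round(seed, train_loader) for seed in range(5)]
# labels = smdtr_predict(models, test_patches)
```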