Accurate land use/land cover (LULC) classification using satellite data is a challenging task owing to the limited spatial and spectral resolutions of individual satellite sensors. The challenge is even more severe for LULC classification of mining regions, as no standardized spectral signature is available for detecting coal mining regions. Thus, the current study aims to design deep learning algorithms that use fused data from three satellite sensors (LISS-IV, Landsat-8, and Sentinel-2A) for LULC classification of mining regions. The fused image was derived from the three satellite sensors using a discrete cosine transform (DCT) with a spatial correlation approach. A comparative evaluation of deep convolutional neural network (DCNN) and deep neural network (DNN) models for LULC classification of mining regions is conducted. Moreover, the performance of each model with fused data is compared with its performance on individual sensor data. The study area chosen for the work is Jharia Coalfield, which comprises five key LULC types, viz. barren land, coal mining region, built-up area, water body, and vegetation. A total of 6000 image samples of size 6 × 6 and 216,000 pixels were used to train and validate the DCNN and DNN models, respectively; that is, the DCNN model uses an object (patch) dataset, while the DNN uses a pixel dataset for training and validation. The DCNN model achieved high training and validation accuracies (99.8% and 99.2%), whereas the DNN model achieved relatively lower accuracies (85.3% and 81.8%). The study further evaluates both models using confusion matrix parameters to measure accuracy, error, precision, and recall for each class. The results reveal that the DCNN model consistently outperforms the DNN model, with accuracy, error, precision, and recall ranging from 99.83% to 99.99%, 0.01% to 0.17%, 99.52% to 99.99%, and 99.40% to 99.99% on the training dataset, and 99.50% to 99.99%, 0.01% to 0.50%, 98.35% to 99.99%, and 98.33% to 99.99% on the validation dataset, respectively. In comparison, the DNN model yields values ranging from 90.36% to 99.90%, 0.01% to 9.64%, 75.10% to 99.53%, and 66.99% to 99.99% on the training dataset, and 88.50% to 99.94%, 0.06% to 11.50%, 72.25% to 99.66%, and 62.50% to 99.99% on the validation dataset. These findings show that the DCNN classification algorithm outperforms the DNN classification algorithm. Moreover, the comparative performance of the DCNN model across datasets indicates that the model trained on fused images outperforms the model trained on individual sensor images.
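To make the fusion step concrete, the following is a minimal block-wise DCT fusion sketch in Python. It assumes two co-registered single-band images of equal shape and uses an AC-coefficient energy criterion (a proxy for local spatial detail) as a simple stand-in for the paper's spatial-correlation rule; the block size, the `fuse_dct` function name, and the border handling are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch of block-wise DCT image fusion; the variance/energy-based
# selection rule here is an assumed stand-in for the paper's spatial-correlation
# criterion, and the 8x8 block size is likewise an assumption.
import numpy as np
from scipy.fft import dctn, idctn

def fuse_dct(img_a, img_b, block=8):
    """Fuse two co-registered single-band images of equal shape."""
    h, w = img_a.shape
    fused = np.zeros_like(img_a, dtype=np.float64)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            a = img_a[i:i+block, j:j+block].astype(np.float64)
            b = img_b[i:i+block, j:j+block].astype(np.float64)
            da = dctn(a, norm="ortho")
            db = dctn(b, norm="ortho")
            # Keep the block whose AC coefficients carry more energy,
            # i.e. the one with more local spatial detail.
            ea = np.sum(da**2) - da[0, 0]**2
            eb = np.sum(db**2) - db[0, 0]**2
            fused[i:i+block, j:j+block] = idctn(da if ea >= eb else db, norm="ortho")
    # Border pixels that do not fill a whole block are left unfused for brevity.
    return fused
```

Fusing three sensors, as in the study, would apply such a rule pairwise or band-by-band after resampling all inputs to a common grid.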
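The two classifiers can be sketched in the same spirit. Below is a hypothetical Keras definition of a small patch-based DCNN for the 6 × 6 image samples and a pixel-based DNN over per-pixel spectral vectors; `n_bands`, the layer widths, the dropout rate, and the optimizer settings are assumptions for illustration, not the architectures reported in the study.

```python
# Hypothetical sketches of the two classifiers; layer sizes, dropout rate,
# and n_bands are assumptions, not the study's reported architectures.
import tensorflow as tf
from tensorflow.keras import layers, models

n_bands = 4  # assumed number of spectral bands in the fused image

# Patch-based DCNN: classifies a 6 x 6 x n_bands image sample (object dataset).
dcnn = models.Sequential([
    layers.Input(shape=(6, 6, n_bands)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),                 # 6x6 feature maps -> 3x3
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),  # five LULC classes
])

# Pixel-based DNN: classifies a single n_bands spectral vector (pixel dataset).
dnn = models.Sequential([
    layers.Input(shape=(n_bands,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),
])

for m in (dcnn, dnn):
    m.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The contrast mirrors the datasets described above: the DCNN sees a small spatial neighborhood per sample, while the DNN sees only one pixel's spectral values.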
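Finally, the per-class accuracy, error, precision, and recall figures quoted above can be derived from a confusion matrix; the sketch below shows a generic one-vs-rest computation, and the `per_class_metrics` helper name and toy counts are hypothetical, since the paper does not state its exact implementation.

```python
# A minimal sketch of per-class metrics from a confusion matrix, using a
# one-vs-rest formulation; function name and example counts are made up.
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    total = cm.sum()
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = total - tp - fp - fn
    accuracy = (tp + tn) / total   # one-vs-rest accuracy per class
    error = 1.0 - accuracy
    precision = tp / (tp + fp)     # assumes every class is predicted at least once
    recall = tp / (tp + fn)
    return accuracy, error, precision, recall

# Example with a toy 2-class confusion matrix (counts are made up):
cm = np.array([[95,  5],
               [ 2, 98]])
print(per_class_metrics(cm))
```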