Remote sensing plays a vital role in industries such as smart cities, agriculture, and environmental monitoring by capturing multispectral images of the Earth's surface for analysis. Accurate classification of these images is essential for extracting actionable information and making informed decisions. However, traditional image classification models, such as Convolutional Neural Networks (CNNs), often struggle to fully capture the complex spectral and spatial information inherent in multispectral imagery. To address this gap, we propose FusionNet-Remote, a hybrid deep learning ensemble that integrates CNNs with Random Forests (RFs) to improve the accuracy and robustness of remote sensing image classification. The primary objective of this study is to combine the spatial feature extraction strength of CNNs with the robust classification capability of RFs. In comprehensive evaluations on LANDSAT data, FusionNet-Remote outperforms existing models, achieving a training accuracy of 99.8% and a testing accuracy of 98.7%, together with the lowest training loss (0.0001) and testing loss (0.0043), significantly surpassing standalone CNNs, VGG16, and InceptionV3. These results demonstrate the effectiveness of the hybrid ensemble approach in overcoming the limitations of conventional methods. This research contributes an ensemble technique that substantially improves the accuracy and reliability of remote sensing image classification; the model's high performance underscores its potential for critical applications such as land cover mapping, vegetation analysis, and disaster monitoring. Future work will explore more advanced CNN architectures and alternative fusion techniques to further enhance the model's performance and adaptability across different remote sensing scenarios.
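The abstract does not specify FusionNet-Remote's internals, but the general pattern it describes — a convolutional front end that extracts spatial features, feeding a Random Forest that performs the final classification — can be sketched as follows. This is a minimal illustrative stand-in, not the authors' implementation: the hand-fixed kernels play the role of learned CNN filters, the patch size (4 bands, 16×16) and the synthetic labels are invented for the example, and the Random Forest settings are defaults.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hand-fixed kernels stand in for learned CNN filters (illustrative only).
KERNELS = [
    np.array([[1., -1.], [1., -1.]]),   # horizontal gradient (adjacent columns)
    np.array([[1., 1.], [-1., -1.]]),   # vertical gradient (adjacent rows)
    np.full((3, 3), 1. / 9.),           # local average
]

def conv2d_valid(img, k):
    """Naive 'valid' 2D cross-correlation of one band with one kernel."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def cnn_like_features(patch):
    """Per-band convolution + ReLU + global max/mean pooling, concatenated.

    This mimics the 'CNN as feature extractor' stage; in the paper's setup
    the features would come from a trained network instead.
    """
    feats = []
    for band in patch:                                    # patch: (bands, H, W)
        for k in KERNELS:
            act = np.maximum(conv2d_valid(band, k), 0.0)  # ReLU
            feats.extend([act.max(), act.mean()])         # global pooling
    return np.array(feats)

def make_patch(label):
    """Synthetic 4-band 16x16 'multispectral' patch: class 1 gets vertical
    stripes on every other column, class 0 is smooth noise."""
    patch = rng.normal(0.0, 0.2, size=(4, 16, 16))
    if label == 1:
        patch[:, :, ::2] += 1.0
    return patch

labels = rng.integers(0, 2, size=120)
X = np.stack([cnn_like_features(make_patch(y)) for y in labels])

# Random Forest performs the final classification on the pooled features.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[:100], labels[:100])
acc = rf.score(X[100:], labels[100:])
print(f"held-out accuracy: {acc:.2f}")
```

The design choice this illustrates is the division of labor the abstract claims: the convolutional stage summarizes local spatial structure into a fixed-length vector, while the forest handles the decision boundary, which is where RFs tend to be robust with modest feature counts.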