ABSTRACT

Recently, the development of various remote sensing sensors has provided more reliable information and data for the identification of different ground classes. Accordingly, multisensor fusion techniques are applied to enhance information extraction from complementary airborne and spaceborne remote sensing data. Most previous research has focused on extracting shallow features from individual sensors and classifying the resulting feature space with decision fusion systems. In recent years, Deep Learning (DL) algorithms have drawn considerable attention in the machine learning community and have found a variety of remote sensing applications, particularly in data fusion. This study presents two feature-learning strategies for the fusion of hyperspectral thermal infrared (HTIR) and visible remote sensing data. In the first, a Deep Convolutional Neural Network (DCNN) learned features from the two datasets, and a Support Vector Machine (SVM) classified those features to produce the class labels. To benchmark this strategy against other learning approaches, a shallow feature model was also evaluated; it classified and fused the two datasets through feature-level and decision-level fusion. Co-registered HTIR and fine-resolution visible (Vis) RGB imagery acquired over Quebec, Canada, was used to examine the effectiveness of the proposed method. Experimental results showed that the proposed deep learning model outperformed the shallow feature-based strategies in classification accuracy, though at a higher computational cost.
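To illustrate the first strategy in general terms, the sketch below shows a minimal convolutional-feature + SVM pipeline. It is a hypothetical toy, not the paper's architecture: random 2-D filters with ReLU and global average pooling stand in for learned deep features, synthetic patches stand in for the fused HTIR/Vis data, and an RBF-kernel SVM produces the class labels.

```python
# Hypothetical sketch of a DCNN-feature + SVM pipeline (not the authors' model).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def conv_features(img, filters):
    """Valid 2-D convolution, ReLU, and global average pooling per filter."""
    h, w = img.shape
    k = filters.shape[1]
    feats = []
    for f in filters:
        out = np.zeros((h - k + 1, w - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + k, j:j + k] * f)
        feats.append(np.maximum(out, 0.0).mean())  # ReLU + pooling
    return np.array(feats)

# Synthetic stand-in patches: two ground classes differing in mean intensity.
filters = rng.normal(size=(8, 3, 3))              # 8 random 3x3 "deep" filters
X = np.array([conv_features(rng.normal(loc=c, size=(16, 16)), filters)
              for c in (0.0, 1.0) for _ in range(40)])
y = np.array([c for c in (0, 1) for _ in range(40)])

# SVM classifies the pooled convolutional features into class labels.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"held-out accuracy: {acc:.2f}")
```

In the paper's actual pipeline the filters would be learned by the DCNN rather than drawn at random, and the input patches would come from the co-registered HTIR and Vis imagery instead of synthetic data.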