Abstract
In this letter, we investigated how to apply a deep learning method, in particular convolutional neural networks (CNNs), to an ocean front recognition task. Transferring deep CNN knowledge to ocean front recognition is challenging because the training data are very scarce. This letter overcomes this challenge with a sequence of transfer learning steps based on fine-tuning. The core idea is to learn deep representations with a CNN model on a large data set and then transfer that knowledge to our ocean front recognition task on limited remote sensing (RS) images. We conducted experiments on two RS image data sets with different visual properties, i.e., color and gray-level data, both downloaded from the National Oceanic and Atmospheric Administration (NOAA). The proposed method was compared with a conventional handcrafted descriptor with bag-of-visual-words, the original CNN model, and a last-layer fine-tuned CNN model. Our method achieved significantly higher accuracy than the other methods on both data sets.
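The following is a minimal sketch, not the authors' exact pipeline, of the fine-tuning idea the abstract describes: start from a CNN pretrained on a large data set and adapt it to a small front/no-front image set. It assumes PyTorch/torchvision, an ImageNet-pretrained ResNet-18 as the source model, and a hypothetical ImageFolder layout "ocean_fronts/train/{front,no_front}/".

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so inputs match the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical small RS data set with one folder per class.
train_set = datasets.ImageFolder("ocean_fronts/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a CNN pretrained on a large data set (ImageNet here) and
# replace the classifier head for the front / no-front task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

# Fine-tune all layers, using a smaller learning rate for the pretrained
# backbone than for the freshly initialized head.
optimizer = torch.optim.SGD(
    [
        {"params": [p for n, p in model.named_parameters()
                    if not n.startswith("fc")], "lr": 1e-4},
        {"params": model.fc.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Fine-tuning every layer, rather than only the last layer, is the variant the abstract contrasts with the last-layer fine-tuned baseline; the specific learning rates and epoch count above are illustrative assumptions.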