Abstract

In recent years, unmanned aerial vehicles (UAVs) have become increasingly popular because of their versatility, automation capabilities, and low cost. Dynamic scene classification has gained significant interest in UAV-based surveillance systems, e.g., high-voltage power line and forest fire monitoring, as it facilitates object detection and tracking and drastically enhances the outcome of visual surveillance. This paper proposes a new optimal deep learning-based scene classification model for images captured by UAVs. The proposed model involves residual network-based feature extraction (RNBFE), which extracts features from the diverse convolution layers of a deep residual network. In addition, manual tuning of the numerous RNBFE parameters is error-prone and leads to many configuration errors. Therefore, the self-adaptive global best harmony search (SGHS) algorithm is employed to tune the RNBFE parameters. The resultant feature vectors are then classified using a latent variable support vector machine (LVSVM) model. The presented optimal RNBFE (ORNBFE) model has been tested on two open-access datasets, namely the UC Merced (UCM) Land Use Dataset and the WHU-RS Dataset. The presented technique attains higher scene classification accuracy than other recently proposed methods.

Highlights

  • Unmanned aerial vehicles (UAVs) fly at low altitudes to capture high-definition images, each covering only a small region [1]

  • This paper proposes a new optimal deep learning-based scene classification model for images captured by unmanned aerial vehicles (UAVs)

  • LeCun et al. [4] trained a CNN for character recognition using the backpropagation (BP) algorithm, producing remarkable results; academic interest in CNNs later declined as attention shifted to the support vector machine (SVM)


Summary

INTRODUCTION

Unmanned aerial vehicles (UAVs) fly at low altitudes to capture high-definition images, each covering only a small region [1]. A scene classifier provides localized information even from wide aerial images that contain unambiguous semantic data about a surface. Existing models fall into three categories: low-level visual features, mid-level visual representations, and high-level visual information. Low-level features are visual descriptors, computed locally or globally, that characterize aerial scene images. A common pipeline extracts local image patches and encodes their local cues, constructing a holistic mid-level representation of the aerial scene. This paper proposes a new optimal deep learning-based scene classification model for images captured by UAVs. The proposed model involves residual network-based feature extraction (RNBFE), whose parameters are tuned using the self-adaptive global best harmony search (SGHS) algorithm. The presented ORNBFE technique attains higher scene classification accuracy than other recently proposed methods.
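The paper does not reproduce the SGHS update rules here; the following is a minimal sketch of a global-best harmony search used as a hyperparameter tuner, with a toy quadratic loss standing in for the RNBFE validation error (the objective, bounds, and parameter names below are illustrative assumptions, not the authors' exact configuration).

```python
import random

def sghs_minimize(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=200, seed=42):
    """Minimal global-best harmony search sketch.

    objective : maps a parameter vector to a loss (lower is better)
    bounds    : list of (low, high) ranges, one per parameter
    hms       : harmony memory size
    hmcr      : harmony memory considering rate
    par       : pitch adjusting rate
    """
    rng = random.Random(seed)
    # Initialize harmony memory with random candidate vectors.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        best = memory[scores.index(min(scores))]
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                value = rng.choice(memory)[d]   # recall a stored value
                if rng.random() < par:
                    value = best[d]             # global-best pitch adjustment
            else:
                value = rng.uniform(lo, hi)     # random improvisation
            new.append(min(hi, max(lo, value)))
        new_score = objective(new)
        worst = scores.index(max(scores))
        if new_score < scores[worst]:           # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    return memory[scores.index(min(scores))]

# Toy stand-in for RNBFE validation error over two hypothetical
# hyperparameters (e.g., learning rate and dropout rate).
loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.5) ** 2
best = sghs_minimize(loss, [(0.0, 0.1), (0.0, 1.0)])
```

In the paper's pipeline, the objective would instead train or evaluate the RNBFE with the candidate parameters and return a classification error, which is far more expensive per evaluation than this toy function.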

RELATED WORKS
PREPROCESSING
PARAMETER TUNING USING SGHS
LVSVM CLASSIFIER
EXPERIMENTAL RESULTS AND DISCUSSION
CONCLUSION