Abstract

Recent advances in optical sensor technologies and geoinformatics support the acquisition of very large-scale, high-definition multispectral and panchromatic images. This capability allows the use of remote sensing for the observation of complex Earth ecosystems. Application areas include biodiversity sustainability, precision agriculture, and the management of land, crops, and parasites. Moreover, it supports advanced quantitative studies of biophysical and biogeochemical cycles in coastal or inland waters. The requirement for precise and effective scene classification can significantly contribute towards the development of new types of decision support systems, offering considerable advantages to business, science, and engineering. This research paper proposes a novel and effective approach to geographic object-based scene classification in remote sensing images. More specifically, it introduces an important upgrade of the well-known Residual Neural Network (ResNet) architecture. Omitting some layers in the early stages of training effectively simplifies the network and mitigates the "Vanishing Gradient Problem" (VGP), which limits the efficiency of other "Deep Learning" (DEL) architectures. The use of the Softmax activation function instead of the Sigmoid in the last layer is the most important innovation of the proposed system. The ResNet is trained with the novel AdaBound algorithm, which employs dynamic bounds on the learning rates. The result is a smooth transition to stochastic gradient descent, tackling noisy, dispersed points of misclassification with great precision, something that other spectral classification methods cannot handle. The proposed algorithm was successfully tested in scene identification from remote sensing images. This confirms that it could be further used in advanced processes for Large-Scale Geospatial Data Analysis, such as cross-border classification, recognition and monitoring of certain patterns, and multi-sensor data fusion.
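To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation: a ResNet backbone whose final layer feeds a Softmax-based classification head, trained with the AdaBound optimizer. It assumes PyTorch, torchvision, and the reference `adabound` package (Luo et al., ICLR 2019; `pip install adabound`); the class count, learning rates, and `train_step` helper are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the described setup (assumptions labeled; not the authors' code).
import torch
import torch.nn as nn
from torchvision import models

import adabound  # reference AdaBound implementation (Luo et al., 2019)

NUM_CLASSES = 45  # hypothetical number of remote sensing scene categories

# Standard ResNet backbone (torchvision >= 0.13 API); the final fully
# connected layer is replaced to output one logit per scene class.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# CrossEntropyLoss applies log-Softmax internally, so the network's logits
# become the Softmax class probabilities the paper uses in the last layer.
criterion = nn.CrossEntropyLoss()

# AdaBound takes Adam-like steps whose per-parameter learning rates are
# clipped by dynamic bounds converging to final_lr, yielding the smooth
# transition towards SGD described in the abstract.
optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one optimization step over a batch of scene images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the Softmax head is realized implicitly through `CrossEntropyLoss`, the standard PyTorch idiom for multi-class scene classification; an explicit `nn.Softmax` layer would only be needed at inference time to read out class probabilities.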
