Abstract

This article presents a remote-sensing image retrieval scheme that uses image visual, object, and spatial relationship semantic features. The scheme comprises two main stages: offline multi-feature extraction and online query. In the offline stage, remote-sensing images are decomposed into several blocks using the Quin-tree structure. Image visual features, including textures and colours, are extracted and stored. Object-oriented support vector machine (SVM) classification is then carried out to obtain the image object semantics, and a spatial relationship semantic is derived using a new spatial orientation description method. The online query stage is a coarse-to-fine process with two sub-steps: rough image retrieval based on the object semantics, and template-based fine image retrieval involving both visual and semantic features. This method differs from many other semantic-based remote-sensing image retrieval methods and is suitable for ‘scene matching’. Moreover, the scheme is distinctive in its system design, its spatial relationship semantic description, and its method of combining and utilizing visual and semantic features.
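The coarse-to-fine online query described above can be sketched in outline. This is a minimal illustrative sketch, not the authors' implementation: all function names, data fields, weights, and the simple distance measures are assumptions made for illustration.

```python
# Hypothetical sketch of the coarse-to-fine online query: a rough filter on
# object semantics followed by a fine ranking that combines visual-feature
# distance with a spatial-relationship semantic match. All names and the
# similarity measures are illustrative assumptions, not the paper's method.

def rough_retrieval(query_objects, database):
    """Coarse stage: keep images whose object-semantic labels
    overlap the query's object set (e.g. {'road', 'building'})."""
    return [img for img in database if query_objects & img["objects"]]

def similarity(query, image, w_visual=0.5, w_semantic=0.5):
    """Fine stage: weighted combination of a visual-feature distance
    (texture/colour vector) and a spatial-relationship semantic term;
    lower scores mean closer matches."""
    visual = sum(abs(a - b) for a, b in zip(query["visual"], image["visual"]))
    semantic = 0.0 if query["spatial"] == image["spatial"] else 1.0
    return w_visual * visual + w_semantic * semantic

def retrieve(query, database, top_k=3):
    """Run the rough filter, then rank the candidates by similarity."""
    candidates = rough_retrieval(query["objects"], database)
    return sorted(candidates, key=lambda img: similarity(query, img))[:top_k]

# Toy example with three database images and one query.
database = [
    {"name": "a", "objects": {"road"}, "visual": [0.1, 0.2], "spatial": "left-of"},
    {"name": "b", "objects": {"water"}, "visual": [0.1, 0.2], "spatial": "left-of"},
    {"name": "c", "objects": {"road", "building"}, "visual": [0.5, 0.9], "spatial": "above"},
]
query = {"objects": {"road"}, "visual": [0.1, 0.2], "spatial": "left-of"}
results = retrieve(query, database)
```

In this toy run, image "b" is discarded in the coarse stage (no shared object semantics), and "a" ranks ahead of "c" because it matches the query in both visual features and spatial relationship.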
