Abstract

Wide-area motion imagery (WAMI) sensors are mounted on helicopters, balloons, small aircraft, or unmanned aerial vehicles and are used to image small city-sized areas at approximately 0.5 m/pixel and one to two frames per second. The geospatial-temporal data sets produced by these systems enable the observation of many dynamic phenomena that were previously inaccessible in street-level video, but the efficient exploitation of this data poses significant technical challenges for image and video analysis and for data mining. Content of interest is defined in very abstract terms related to how humans interpret video imagery, whereas the data is defined in very physical terms related to the imaging device. This difference in representations is often called the semantic gap. In this review article, we describe the advances that have been made and those that will be needed to produce the hierarchy of computational models required to narrow the semantic gap in WAMI.
