Abstract
This work addresses the problem of extracting semantics from multiple, cooperatively managed motion imagery sensors to support indexing and search of large imagery collections. The extracted semantics relate to the motion and identity of vehicles within a scene, viewed from aircraft and from the ground. Semantic extraction proceeds in three steps: Video Moving Target Indication (VMTI), imagery fusion, and object recognition. VMTI used a previously published algorithm with novel modifications enabling detection and tracking in low-frame-rate Wide Area Motion Imagery (WAMI) and in Full Motion Video (FMV). The data from multiple sensors were then fused to identify the highest-resolution image corresponding to each moving object. A final recognition stage attempted to fit each delineated object to a database of 3D models to determine its type. A proof-of-concept system was developed to process imagery collected during a recent experiment using a state-of-the-art airborne surveillance sensor providing WAMI, with coincident narrower-area FMV sensors and simultaneous collection by a ground-based camera. An indication of the potential utility of the system was obtained using ground-truthed examples.