Abstract

A system has been developed whereby active LADAR and passive electro-optic (EO) imaging data are registered in hardware at the pixel level. For the sake of discussion, the sensor is herein called a “LADAR/EO Fusion Sensor” or LEFS. The resulting fully aligned, high-dimensional feature vector enhances target recognition and permits dense point matching for precise image mosaicking. A significant benefit lies in combining the ability of pencil-beam active systems to work at long ranges and to penetrate obscurants with the passive array’s wide instantaneous field of view and higher resolution. One application in which this has proven especially valuable is observation through partial or intermittent obscuration, e.g. under partial cloud cover or foliage. Reflected radiation associated with features within gaps in the obscuration is sensed passively while, at the same time, the active pencil beam efficiently maps structure within the revealed region. Pixel-level registration of the data permits extended regions to be mapped by combining temporally or spatially diverse collections. This data is convenient for estimating the probability of detection and recognition in cluttered environments. Given knowledge of the nature of the clutter provided by the LEFS, the probability of target presence (prior and posterior) can be better estimated. A temporally evolving target detectability map can be produced and overlaid on a target expectation map to facilitate an understanding of the likelihood of false and missed detections. Methods for estimating the length of time for which a targeting decision can be deferred, as well as for estimating the probability of resolving ambiguity in time, are presented. Military applications for which this technology is being developed or assessed include precision tactical targeting, precision Controlled Reference Image Base (CRIB) production, and automatic registration of targeting data into the CRIB. Civil applications include 3D city modelling, real-time airborne mapping, post-disaster reconnaissance, floodplain and coastline mapping, drug-interdiction target detection, environmental monitoring, and search and rescue.

Introduction

The probability of detection and identification of targets in cluttered environments is influenced by clutter density and by the associated ability of a remote sensor to see through holes in the clutter. When acquiring imagery from low-flying dynamic platforms such as small UAVs, or from ground-based sensors used by special forces, often only glimpses are possible through holes between trees or into steep canyons. For a given EO/IR image, only a limited number of ground patches might be visible. When a second image is coregistered with the first, the number and size of the visible patches increase. Given collection from enough viewpoints, it is theoretically possible to piece together enough patches to view a significant portion of the scene behind the clutter and discover otherwise obscured targets. Piecing the patches together accurately, however, is a major problem without a fine determination of the clutter geometry, i.e. the shape of the trees and the geometry of the intervening holes. Compiling this complex geometry through stereoscopic techniques is problematic when many points on the ground are viewable only through a single hole from a single vantage point. This paper presents an approach to extracting EO data from multiple holes in clutter with the assistance of a range sensor, such as a LADAR, fused at the pixel level.
The resulting 3D “patch data” provides an opportunity to characterize the viewability of targets in clutter and to compute the probability of detecting and recognizing them.
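To make the prior-to-posterior update concrete, consider one minimal form it could take; the symbols below (p_0, v, P_d) are illustrative and are not notation taken from the paper. Let p_0 be the prior probability that a target occupies a given ground cell, let v be the fraction of that cell currently revealed through holes (read off the LEFS detectability map), and let P_d be the single-look detection probability for an unobscured target. If a look produces no detection, Bayes' rule gives the posterior

\[
P(T \mid \bar{D}) = \frac{p_0\,(1 - v P_d)}{p_0\,(1 - v P_d) + (1 - p_0)} .
\]

After n looks with revealed fractions v_1, ..., v_n, the miss factor becomes \prod_{i=1}^{n} (1 - v_i P_d), which shrinks as coverage accumulates; the number of looks required to drive the posterior below a decision threshold is one way to bound how long a targeting decision can safely be deferred.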
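The patch-accumulation step itself is straightforward once every EO pixel carries a fused LADAR range. The following Python sketch is a hypothetical illustration under stated assumptions, not the system's implementation: it assumes each view has already been resampled onto a common ground grid using the pixel-level registration and platform navigation, and it declares a cell visible through a hole whenever the LADAR return agrees with the expected range to bare ground (a shorter return means the pencil beam struck intervening foliage or cloud). The function, array, and key names are invented for the example.

import numpy as np

def accumulate_ground_patches(views, range_tol=0.5):
    """Combine pixel-registered LADAR/EO views into a ground mosaic.

    Each view is a dict of arrays on a common H x W ground grid:
    'eo' (passive intensity), 'ladar_range' (measured return range),
    and 'ground_range' (expected range to bare ground).
    """
    h, w = views[0]["eo"].shape
    eo_sum = np.zeros((h, w))
    looks = np.zeros((h, w), dtype=int)  # per-cell count of clear looks
    for view in views:
        # Ground is visible through a hole when the LADAR return comes
        # from the expected ground range rather than from the clutter.
        visible = np.abs(view["ladar_range"] - view["ground_range"]) < range_tol
        eo_sum[visible] += view["eo"][visible]
        looks[visible] += 1
    # Average the EO samples wherever at least one look reached the ground.
    mosaic = np.where(looks > 0, eo_sum / np.maximum(looks, 1), np.nan)
    return mosaic, looks

The looks array doubles as a temporally evolving detectability map: the fraction of nonzero cells is the revealed fraction v used in the posterior above, and cells that remain at zero mark where missed detections are still possible.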
