Abstract

The forward-looking ground penetrating radar (FLGPR) is a remote sensing modality that has recently been investigated for buried threat detection. The FLGPR considered in this work uses stepped-frequency sensing followed by filtered backprojection to create images of the ground, where each image pixel corresponds to the radar energy reflected from the subsurface at that location. Typical target detection processing begins with a prescreening operation in which a small subset of spatial locations is selected for further processing. Image statistics, or features, are then extracted around each selected location and used to train a machine learning classification algorithm. A variety of features have been proposed in the literature for use in classification. Thus far, however, the features employed have been predominantly hand-crafted or manually designed features from the computer vision literature (e.g., HOG, Gabor filtering, etc.). Recently, it has been shown that image features learned directly from data can achieve state-of-the-art performance on a variety of problems. In this work, we employ a feature learning scheme based on k-means clustering and a bag-of-visual-words model to learn effective features for discriminating targets from non-targets in FLGPR data. Experiments are conducted using several lanes of FLGPR data, and the learned features are compared with several previously proposed static features. The results suggest that the learned features perform comparably to, or better than, the existing static features. As in other feature learning results, the learned features consist of edges or texture primitives, revealing which structures in the data are most useful for discrimination.
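The abstract names the feature learning pipeline (k-means clustering plus a bag-of-visual-words encoding) without implementation detail. The following Python sketch illustrates one plausible form of that pipeline; the patch size, stride, dictionary size, and the use of scikit-learn's KMeans are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, patch_size=8, stride=4):
    """Densely extract small square patches from a 2-D image chip."""
    patches = []
    h, w = image.shape
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            p = image[r:r + patch_size, c:c + patch_size].ravel()
            # Contrast-normalize each patch so clustering captures
            # structure (edges, texture) rather than absolute energy.
            p = (p - p.mean()) / (p.std() + 1e-8)
            patches.append(p)
    return np.array(patches)

def learn_dictionary(training_chips, n_words=64, patch_size=8):
    """Learn a visual-word dictionary by clustering patches with k-means."""
    all_patches = np.vstack([extract_patches(chip, patch_size)
                             for chip in training_chips])
    return KMeans(n_clusters=n_words, n_init=10).fit(all_patches)

def bovw_feature(chip, km, patch_size=8):
    """Encode an image chip as an L1-normalized histogram of visual words."""
    patches = extract_patches(chip, patch_size)
    words = km.predict(patches)
    hist = np.bincount(words, minlength=km.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)
```

In a pipeline like the one the abstract describes, one such histogram would be computed from the image chip around each prescreener alarm and passed to a classifier in place of a hand-crafted descriptor such as HOG.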
