Abstract

A parallel genetic algorithm is presented for 2-D object recognition and simultaneous estimation of object position, magnification and orientation from quantum-limited sensor data. Traditional approaches to this problem are based on matching a concise set of features (boundaries, corners, moments, etc.) from the sensor data to a corresponding set of model features. These approaches break down at low SNR due to a deluge of artifacts among the data features, and inconsistencies arising from the lack of optimal interaction between high-level and low-level vision processes. As a first step towards overcoming the above hurdles, this paper presents a drastic departure from conventional vision-based approaches that (i) avoids the computation of features from noisy data, and (ii) uses a synergistic interaction of high-level and low-level vision processes to avoid inconsistencies. The combined vision problem is posed as a large-scale global optimization over a single objective function that directly involves the sensor data, the noise model and object templates. The optimization is accomplished using a genetic algorithm that runs on a parallel computer with 40 Transputers. Experimental results are presented which demonstrate robust operation and high accuracy with quantum-limited (5–10 events/pixel) data.
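
To make the approach concrete, the following is a minimal, illustrative sketch of the idea described above: a genetic algorithm searches directly over pose parameters (position, magnification, orientation), scoring each candidate by a Poisson log-likelihood of the raw quantum-limited counts against a transformed object template, with no feature extraction. This is not the authors' Transputer implementation; all function names, parameter values and GA settings here are hypothetical.

```python
# Hypothetical sketch: GA over pose (x, y, scale, angle) maximizing a
# Poisson log-likelihood of quantum-limited count data against a template.
import numpy as np
from scipy.ndimage import rotate, zoom, shift

rng = np.random.default_rng(0)

def render(template, x, y, scale, angle, shape):
    """Transform the template by the candidate pose and embed it in the image frame."""
    t = zoom(template, scale, order=1)
    t = rotate(t, angle, reshape=False, order=1)
    img = np.zeros(shape)
    h, w = min(t.shape[0], shape[0]), min(t.shape[1], shape[1])
    img[:h, :w] = t[:h, :w]
    return shift(img, (y, x), order=1)

def log_likelihood(counts, rate):
    """Poisson log-likelihood (up to an additive constant) of counts given a rate image."""
    rate = np.clip(rate, 1e-6, None)
    return np.sum(counts * np.log(rate) - rate)

def ga_search(counts, template, pop=60, gens=80):
    """Genetic search over pose vectors; fitness is the data likelihood itself."""
    shape = counts.shape
    lo = np.array([0.0, 0.0, 0.5, 0.0])
    hi = np.array([shape[1] / 2, shape[0] / 2, 2.0, 360.0])
    popn = rng.uniform(lo, hi, size=(pop, 4))
    for _ in range(gens):
        fit = np.array([log_likelihood(counts, render(template, *p, shape)) for p in popn])
        elite = popn[np.argsort(fit)[::-1][: pop // 4]]   # keep the fittest quarter
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]  # pick two elite parents
            child = 0.5 * (a + b) + rng.normal(0, 0.05, 4) * (hi - lo)  # blend + mutate
            children.append(np.clip(child, lo, hi))
        popn = np.vstack([elite, children])
    fit = np.array([log_likelihood(counts, render(template, *p, shape)) for p in popn])
    return popn[np.argmax(fit)]  # estimated (x, y, scale, angle)
```

Because the fitness of each candidate pose is evaluated independently, the population can be scored in parallel, which is the kind of workload the paper maps onto its 40-Transputer machine.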
