We consider the realization of motion-extended array synthesis (MXAS), in the RF and acoustic sensing domains, for the purpose of achieving high resolution in imaging, direction finding (DF), and geolocation applications. The focus is on a method in which, as in synthetic aperture radar (SAR), the synthesized aperture is much larger than any dimension of the physical system, but in which, unlike SAR, no reference waveform is available to exploit. We show that arbitrarily large virtual arrays, limited only by the antenna element field of view, can be created by moving one collection system away from another, sweeping out a 1-D or 2-D area. The virtual array so created is suitable for use in general signal environments, including complex scenarios with cochannel and/or ultraweak sources. In the MXAS processing construct, the temporally distributed array formation is cast into a form resembling standard array processing, affording the same degrees of freedom as a real array of the same size. A performance assessment comparing an MXAS implementation to a leading DF and geolocation method shows orders-of-magnitude improvement for MXAS, as well as conformance to the Cramér–Rao lower bound for DF with an array, under the correct interpretation of motion extension. Finally, we discuss candidate application areas.
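To make the abstract's central idea concrete, the following is a minimal sketch of the kind of processing it describes: one receiver moves away from a fixed reference receiver, each dwell position contributes one element of a large virtual array, and cross-correlation against the reference removes the unknown source waveform so that standard array processing (here, a conventional beamformer) can be applied. All parameter choices and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's implementation): a moving
# receiver sweeps out a 1-D virtual aperture; a fixed reference receiver
# supplies the waveform that the method itself does not assume known.
rng = np.random.default_rng(0)

c = 3e8                        # propagation speed (m/s)
f0 = 1e9                       # narrowband carrier (Hz)
lam = c / f0                   # wavelength
theta_true = np.deg2rad(20.0)  # source direction (broadside = 0)

M = 64                         # dwell positions of the moving receiver
d = lam / 2                    # spacing between dwell positions
positions = d * np.arange(M)   # virtual element locations

N = 256                        # samples collected per dwell
virtual_snapshot = np.zeros(M, dtype=complex)
for m, x in enumerate(positions):
    # Unknown (random) source waveform for this dwell
    s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    geom_phase = np.exp(1j * 2 * np.pi * x * np.sin(theta_true) / lam)
    noise = lambda: 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    ref = s + noise()               # fixed reference receiver
    mov = s * geom_phase + noise()  # moving receiver at position x
    # Cross-correlation with the reference cancels the unknown waveform,
    # leaving the geometric phase of this virtual array element.
    virtual_snapshot[m] = np.vdot(ref, mov) / N

# Standard array processing on the temporally assembled virtual array:
# conventional beamformer scan over candidate directions.
scan = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
steer = np.exp(1j * 2 * np.pi * np.outer(positions, np.sin(scan)) / lam)
spectrum = np.abs(steer.conj().T @ virtual_snapshot)
theta_hat = np.rad2deg(scan[np.argmax(spectrum)])
print(f"estimated DOA: {theta_hat:.1f} deg")
```

The key point the sketch mirrors is that the M dwell positions are collected at different times, yet the assembled snapshot is processed exactly like a single observation from a real M-element array, so the angular resolution scales with the swept aperture rather than with any physical dimension of the equipment.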