Abstract
The detection of small targets in uncompressed imagery frequently incurs high computational cost due to area-based filtering and template-matching processes. In particular, convolving a K-pixel filter with an N-pixel image typically requires work bounded below by O(KN). However, we have shown that such image-template operations can be computed in less than O(KN) time if the image is appropriately compressed; we call this technique compressive processing. In this two-part series of papers, we present supporting theory, derivations, and analyses of compressive image-template operations that frequently occur in automated target recognition practice. For example, compression ratios of 30:1 or greater have been reported for imagery whose interframe differences are small, and similarly high compression ratios have been reported for video imagery compressed with vector quantization (VQ) or visual pattern image coding (VPIC). We thus derive image operations such as edge detection and target classification that are applicable to VQ- and VPIC-compressed imagery, as well as to a VPIC-like transform called Adaptive Vector Entropy Coding. In the case of edge detection and target classification over VQ- or VPIC-compressed imagery, we show that computational speedups of O(CR), where CR denotes the compression ratio, can be obtained with appropriate data-structure manipulation. For example, if VQ is employed with fixed-size, K-pixel encoding blocks, then edge detection can be achieved by entropy-based thresholding of the VQ codebook exemplars, at a cost of N/K block substitution operations. Given a codebook of M vectors, an additional overhead of 2M comparisons may be required for validation purposes. A similar method is employed for VPIC, which encodes image patterns in terms of the encoding block's mean, gradient intensity and orientation, and an index that references a bitmap pattern; in practice, the bitmap is derived from the encoding block's zero crossings about the block mean. Our analyses emphasize performance measures such as computational cost, information loss, computational error, and compression ratio. Our algorithms are expressed in terms of image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Since image algebra has been implemented on numerous sequential and parallel computers, our algorithms are feasible and widely portable.
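To make the compressed-domain edge-detection scheme concrete, the following is a minimal Python sketch of the idea. The function names, the 4x4 block size, the 0.9-bit entropy threshold, and the use of zero-crossing bitmaps as the per-exemplar feature (borrowed from the abstract's VPIC description) are illustrative assumptions, not the authors' implementation; the VPIC encoder shown also omits the gradient intensity and orientation fields that VPIC stores.

```python
import numpy as np

def bitmap_entropy(bitmap):
    """Shannon entropy (in bits) of a binary bitmap: near 0 for a smooth
    block, near 1 for a block whose pixels straddle an intensity edge."""
    p = float(bitmap.mean())
    if p == 0.0 or p == 1.0:
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def vpic_encode_block(block):
    """VPIC-style encoding of one k-by-k block: the block mean plus the
    bitmap of zero crossings about that mean. (VPIC additionally stores
    gradient intensity and orientation; those fields are omitted here.)"""
    mean = float(block.mean())
    bitmap = block > mean
    return mean, bitmap

def edge_map_from_vq(indices, codebook, threshold=0.9):
    """Edge detection directly on a VQ-compressed image.

    indices  -- (H/k, W/k) integer array, one codebook index per k-by-k block
    codebook -- (M, k, k) array of exemplar blocks

    The M exemplars are classified once by entropy thresholding of their
    zero-crossing bitmaps (the small per-codebook overhead the abstract
    mentions); each of the N/K encoded blocks then costs only a table
    lookup, so the image is never decompressed and no O(KN) spatial
    convolution is performed."""
    is_edge = np.empty(len(codebook), dtype=bool)
    for m, exemplar in enumerate(codebook):
        _, bitmap = vpic_encode_block(exemplar)
        is_edge[m] = bitmap_entropy(bitmap) >= threshold
    # N/K block substitutions via a single table lookup.
    return is_edge[indices]

# Toy usage with synthetic data (all shapes are illustrative):
rng = np.random.default_rng(0)
codebook = rng.random((64, 4, 4))           # M = 64 exemplars of 4x4 blocks
indices = rng.integers(0, 64, (32, 32))     # 128x128-pixel image, block-coded
print(edge_map_from_vq(indices, codebook).shape)   # (32, 32) block-level edge map
```

Note how the cost structure of the sketch matches the abstract's accounting: the exemplar loop contributes the per-codebook overhead proportional to M, while the final lookup performs the N/K block substitutions, which is the source of the speedup on the order of the compression ratio.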