Abstract
The increased use of power- and space-constrained embedded processors in a wide variety of autonomous imaging and surveillance applications demands increased speed of the computational resources that follow image acquisition in the processing stream. In such early vision applications, one typically processes an entire image data stream prior to spatial downselection operations such as focus-of-attention involving area-of-interest (AOI) selection. Downselection is especially useful in the emerging technologies of spatiotemporal adaptive processing (STAP) and biomimetic automated target recognition (ATR). Here, progressive data reduction employs operations or sub-algorithms that process fewer data, but in a more involved manner, at each step of an algorithm. Implementationally, the STAP approach is amenable to embedded hardware using processors with deep pipelines and fine-grained parallelism. In Part 1 of this two-paper series, we showed how compression of an image or sequence of images can facilitate more efficient image computation or ATR by processing fewer (compressed) data. This technique, called compressive processing or compressive computation, typically utilizes fewer operations than the corresponding ATR or image processing operation over noncompressed data, and can produce space, time, error, and power (STEP) savings on the order of the compression ratio (CR). Part 1 featured algorithms for edge detection in images compressed by vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST, a recently reported block-oriented high-compression transform. In this paper, we continue the presentation of theory for compressive computation of early processing operations such as morphological erosion and dilation, as well as higher-level operations such as connected component labeling. We also discuss the algorithm and hardware modeling technique that supports analysis and verification of compressive processing efficiency. This methodology emphasizes 1) translation of each image processing algorithm or operation to a prespecified compressive format, 2) determination of the operation mix M for each algorithm produced in Step 1, and 3) simulation of M on various architectural models to estimate performance. As in Part 1, algorithms are expressed in image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain.
© 2000 SPIE, The International Society for Optical Engineering.
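To make the core idea concrete (processing the compressed representation rather than the full pixel stream), the following is a minimal sketch only. The 4x4 block size, 16-entry binary VQ codebook, and the use of simple 3x3 binary erosion are assumptions chosen for illustration and are not the operators developed in the paper; in particular, block-boundary interactions are ignored here, whereas a complete compressive operator must account for them.

```python
import numpy as np

# Hypothetical setup: codebook size, block size, and binary codewords are
# illustrative assumptions, not parameters taken from the paper.
rng = np.random.default_rng(0)
K, B = 16, 4                                                 # codebook entries, block edge
codebook = (rng.random((K, B, B)) > 0.5).astype(np.uint8)    # binary codewords
indices = rng.integers(0, K, size=(128, 128))                # VQ index image = compressed data

def erode_block(block):
    """3x3 binary erosion restricted to one block (boundary rows/cols left zero)."""
    out = np.zeros_like(block)
    out[1:-1, 1:-1] = (
        block[:-2, :-2] & block[:-2, 1:-1] & block[:-2, 2:]
        & block[1:-1, :-2] & block[1:-1, 1:-1] & block[1:-1, 2:]
        & block[2:, :-2] & block[2:, 1:-1] & block[2:, 2:]
    )
    return out

# Compressive computation: erode each of the K codewords once, instead of
# eroding every block of the decompressed image; the (much smaller) index
# image then already refers to eroded blocks.
eroded_codebook = np.stack([erode_block(c) for c in codebook])

# Decoding on demand assembles the processed image from the eroded codewords.
blocks = eroded_codebook[indices]                            # shape (128, 128, B, B)
image = blocks.transpose(0, 2, 1, 3).reshape(128 * B, 128 * B)
print(image.shape)   # (512, 512), with erosion applied only K times, not per block
```

Because the index image holds roughly CR times fewer elements than the pixel image, the work saved in this toy example scales with the compression ratio, which is the intuition behind the STEP savings claimed above.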
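The three-step methodology (translation to a compressive format, determination of the operation mix M, evaluation of M on architectural models) can likewise be sketched. All operation counts, cycle costs, and sizes below are hypothetical placeholders, since the abstract does not give the actual mixes or models.

```python
# Hypothetical per-operation cycle costs for one architectural model.
CYCLES = {"and": 1, "load": 2, "store": 2}

def cost(mix, units):
    """Cycle estimate: apply an operation mix to `units` data items."""
    return units * sum(count * CYCLES[op] for op, count in mix.items())

pixels = 512 * 512          # noncompressed image size
CR = 16                     # idealized compression ratio of a 4x4 block transform
blocks = pixels // CR       # number of compressed data items (blocks)

# Step 2: operation mix M per data unit for each algorithm from Step 1.
M_noncompressed = {"and": 8, "load": 9, "store": 1}   # per pixel (3x3 erosion)
M_compressive = {"and": 8, "load": 9, "store": 1}     # per block (assumed comparable work)

# Step 3: evaluate each mix on the architectural cost model.
t_noncompressed = cost(M_noncompressed, pixels)
t_compressive = cost(M_compressive, blocks)
print(f"estimated speedup: {t_noncompressed / t_compressive:.1f}x  (CR = {CR})")
# -> ~16x, i.e., a time saving on the order of the compression ratio CR
```

Presumably the space, error, and power terms of the STEP figure would be estimated from analogous per-operation models driven by the same mix M, but the abstract does not specify those models.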