Abstract
A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16×16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16×16 neural net. For compression purposes, each image block is further divided into 4×4 subblocks and, similar to the classical block truncation coding (BTC) scheme, a one-bit nonparametric quantizer is used to encode 16×16 character and 4×4 image blocks. In this case, however, the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.
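For context, below is a minimal sketch of the classical one-bit BTC quantizer that the scheme takes as its starting point. The moment-preserving level formulas and the sample 4×4 block are standard BTC material, not taken from this paper; in the proposed architecture the binary map and the two quantizer levels would instead be produced by the neural-net segmentor.

```python
# Minimal sketch of classical one-bit BTC on a single 4x4 subblock.
# Shown for reference only: the paper replaces the moment-preserving
# formulas below with levels and a binary map obtained from a neural net.
import numpy as np

def btc_encode(block: np.ndarray):
    """One-bit BTC: a binary map plus two reconstruction levels per block."""
    m = block.size
    mu = block.mean()
    sigma = block.std()
    bitmap = block >= mu                  # 1 bit per pixel
    q = int(bitmap.sum())                 # pixels at or above the block mean
    if q == 0 or q == m:                  # flat block: one level is enough
        return bitmap, mu, mu
    a = mu - sigma * np.sqrt(q / (m - q))       # level assigned to "0" pixels
    b = mu + sigma * np.sqrt((m - q) / q)       # level assigned to "1" pixels
    return bitmap, a, b

def btc_decode(bitmap: np.ndarray, a: float, b: float) -> np.ndarray:
    """Reconstruct the block from the binary map and the two levels."""
    return np.where(bitmap, b, a)

# Illustrative 4x4 gray-level subblock (hypothetical values).
block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]], dtype=float)

bitmap, a, b = btc_encode(block)
print(btc_decode(bitmap, a, b).round())
```

The two levels are chosen so that the reconstructed block preserves the sample mean and variance of the original, which is what makes one bit per pixel plus two levels per block sufficient for acceptable quality at high compression.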