Abstract

The discrete-time cellular neural network (DTCNN) is a promising computing paradigm that fuses artificial neural networks with the concept of the cellular automaton (CA) and has many applications in pixel-level image processing. Although several architectures have been proposed for processing DTCNNs, there are no compact, practical computers that can process real-world images of several hundred thousand pixels at video rates. As a result, in spite of their great potential, DTCNNs are not used for image processing outside the laboratory. This paper proposes a DTCNN processing method based on a highly parallel two-dimensional (2-D) cellular automaton called CAM². CAM² can attain pixel-order parallelism on a single PC board because it is built from content addressable memory (CAM), which makes it possible to embed an enormous number of processing elements, corresponding to CA cells, on one VLSI chip. A new mapping method exploits the maskable-search and the parallel, partial-write commands of CAM² to enable high-performance DTCNN processing. Evaluation results show that, on average, CAM² can perform one transition for various DTCNN templates in about 12 microseconds. It can also perform practical image processing through a combination of DTCNNs and other CA-based algorithms. CAM² is thus a promising platform for processing DTCNNs.
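
For readers unfamiliar with the model, the sketch below shows one synchronous DTCNN state transition in its standard form, y(t+1) = sgn(A*y(t) + B*u + I), with binary outputs in {-1, +1}. This is only an illustration of the generic DTCNN update, not the paper's CAM²-based mapping; the function name, the boundary treatment, and the edge-detection-style template values are illustrative assumptions.

```python
# Minimal sketch of one DTCNN transition (generic model, not the CAM^2 mapping).
# Template and bias values below are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import correlate2d

def dtcnn_step(y, u, A, B, I):
    """One synchronous DTCNN transition over the whole cell array.

    y : current binary output image (+1 / -1)
    u : constant binary input image (+1 / -1)
    A : 3x3 feedback template, B : 3x3 control template, I : bias
    """
    # Correlate each cell's 3x3 neighbourhood with the templates;
    # cells outside the array are treated as background (-1).
    feedback = correlate2d(y, A, mode="same", boundary="fill", fillvalue=-1)
    control = correlate2d(u, B, mode="same", boundary="fill", fillvalue=-1)
    # Hard-threshold the activation so the next state stays binary.
    return np.where(feedback + control + I >= 0, 1, -1)

# Example: an edge-detection-style template pair (illustrative values).
A = np.zeros((3, 3)); A[1, 1] = 1.0
B = np.array([[-1., -1., -1.],
              [-1.,  8., -1.],
              [-1., -1., -1.]])
I = -1.0
u = np.where(np.random.rand(64, 64) > 0.5, 1, -1)  # random binary test image
y = dtcnn_step(u.copy(), u, A, B, I)                # one parallel transition
```

In the paper's scheme, such a transition is carried out for all cells at once by mapping the template evaluation onto CAM² search and partial-write operations rather than by conventional arithmetic.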
