Abstract

A Multi-Layered Feedforward Neural Network (MLFFNN) with a single hidden layer between an input layer and an output layer, once appropriately trained, performs image compression at the hidden layer and decompression at the output layer using optimum weights and suitable bias values. The computational complexity of the multi-layered architecture poses challenges for hardware implementation, requiring a large number of multipliers, adders, and memory elements. Hardware circuits designed to implement Artificial Neural Network (ANN) architectures are referred to as Hardware Neural Networks (HNNs), and their implementation offers advantages in terms of area and power. This proposal describes the implementation of a FeedForward Neural Network (FFNN) algorithm, realized using a modified architecture that compresses and decompresses images obtained from sub-bands. The modified architecture employs pipelined and parallel processing; the proposed design is modeled in Verilog, synthesized using Xilinx ISE, and targeted to a Virtex-5 FPGA (Field Programmable Gate Array).
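The compression scheme the abstract describes can be sketched in a few lines: an image block feeds a narrow hidden layer (compression) whose activations are expanded back to the original dimension at the output layer (decompression). The sketch below is illustrative only; the block size, hidden-layer width, and tanh activation are assumptions, and the random weights stand in for the trained values the paper would use.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 16  # assumed: flattened 8x8 block (64 pixels) compressed to 16 values (4:1)

# Random stand-ins for trained weights and biases (the paper's training
# would determine the optimum values).
W_in = rng.standard_normal((K, N)) * 0.1   # hidden-layer (compression) weights
b_in = np.zeros(K)                         # hidden-layer bias
W_out = rng.standard_normal((N, K)) * 0.1  # output-layer (decompression) weights
b_out = np.zeros(N)                        # output-layer bias

def compress(block):
    """Hidden layer: map 64 pixels to 16 activations."""
    return np.tanh(W_in @ block + b_in)

def decompress(code):
    """Output layer: reconstruct 64 pixels from the 16-value code."""
    return W_out @ code + b_out

block = rng.random(N)           # a flattened image block of pixel intensities
code = compress(block)          # compressed representation, shape (16,)
recon = decompress(code)        # reconstructed block, shape (64,)
print(code.shape, recon.shape)
```

In a hardware realization, each `W @ x + b` step is exactly the bank of multipliers, adders, and memory elements the abstract refers to, which is what the pipelined and parallel architecture is designed to reduce.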
