Abstract

A large-scale, dual-network architecture using wafer-scale integration (WSI) technology is proposed. Using 0.8 µm CMOS technology, up to 144 self-learning digital neurons were integrated on each of eight 5-inch silicon wafers. Neural functions and the back-propagation (BP) algorithm were mapped to digital circuits. The complete hardware system packaged more than 1000 neurons within a 30 cm cube. The dual-network architecture allowed high-speed learning at more than 2 giga connection updates per second (GCUPS). The high fault tolerance of the neural network and the proposed defect-handling techniques overcame the yield problem of WSI. This hardware can be connected to a host workstation and used to simulate a wide range of artificial neural networks. Signature verification and stock price prediction have already been demonstrated with this hardware.
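The abstract does not give implementation details, so the following is only a minimal NumPy sketch of the per-connection back-propagation update that the GCUPS figure counts; the layer sizes, learning rate, and sigmoid activation are assumptions for illustration, not the paper's circuit design.

```python
# Illustrative sketch only: the paper maps back-propagation to digital
# circuits; this shows the per-connection weight update that a
# "connection updates per second" rate measures.
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                          # assumed learning rate
x = rng.random(144)                # inputs to one layer (144 neurons per wafer in the paper)
w = rng.random((10, 144))          # weights of an assumed 10-neuron output layer
target = rng.random(10)

y = 1.0 / (1.0 + np.exp(-w @ x))       # sigmoid activations
delta = (target - y) * y * (1.0 - y)   # output-layer error term
w += eta * np.outer(delta, x)          # one "connection update" per weight element

# Each element of np.outer(delta, x) is one connection update; a rate of
# 2 GCUPS corresponds to 2e9 such updates per second.
```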
