Abstract

With the advent of new technologies and advances in medical science, we are attempting to process information artificially in the way our biological system does inside the body. Artificial intelligence, a term borrowed from biology, is realized here using mathematical equations and artificial neurons. Our main focus is the implementation of a Neural Network Architecture (NNA) with on-chip learning in analog VLSI for generic signal processing applications. In the proposed paper, analog components such as the Gilbert Cell Multiplier (GCM) and the Neuron Activation Function (NAF) circuit are used to implement the artificial NNA. The analog components comprise multipliers and adders, together with a tan-sigmoid function circuit built from MOS transistors operating in the subthreshold region. The neural architecture is trained using the Back Propagation (BP) algorithm in the analog domain with new weight-storage techniques. Layout design and verification of the proposed design are carried out using the Tanner EDA 14.1 tool and Synopsys T-Spice. The technology used in designing the layouts is MOSIS/HP 0.5 µm SCN3M, Tight Metal.
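
As a behavioral point of reference, the signal path of one such neuron can be sketched in software. The Python snippet below is a minimal model, assuming the GCM acts as an ideal four-quadrant multiplier and the subthreshold NAF circuit as a unit-gain tanh; the function names and the gain parameter are illustrative, not taken from the paper.

import numpy as np

def gilbert_multiply(w, p):
    # Ideal stand-in for the Gilbert Cell Multiplier (GCM):
    # the four-quadrant analog multiplier is modeled as w * p.
    return w * p

def naf(x, gain=1.0):
    # Neuron Activation Function (NAF): a MOS differential pair
    # biased in the subthreshold region has a tanh-shaped transfer
    # curve, so the tan-sigmoid is modeled as tanh(gain * x).
    return np.tanh(gain * x)

def neuron(inputs, weights, bias):
    # One analog neuron: multiply-accumulate followed by the NAF.
    s = sum(gilbert_multiply(w, p) for w, p in zip(weights, inputs)) + bias
    return naf(s)

For example, neuron([0.5, -0.2, 0.1], [0.8, 0.3, -0.5], 0.1) evaluates f(P1W1 + P2W2 + P3W3 + Bias) for a three-input neuron.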

Highlights

  • Artificial intelligence is implemented using artificial neurons, and these artificial neurons are composed of several analog components

  • The proposed paper is a step toward the implementation of a neural network architecture using the back propagation algorithm for data compression

  • The training algorithm is performed in the analog domain; the whole neural architecture is an analog structure

Summary

Artificial Intelligence

Intelligence is the computational part of the ability to achieve goals in the world. Intelligence is a biological word, and it is acquired from past experience. Artificial intelligence is implemented using artificial neurons, and these artificial neurons are composed of several analog components. The proposed paper is a step toward the implementation of a neural network architecture using the back propagation algorithm for data compression. The neuron of Figure 1 (Neural Network) can be expressed mathematically as a = f(P1W1 + P2W2 + P3W3 + Bias), where 'a' is the output of the neuron, 'p' is an input, and 'w' is a neuron weight. The neural architecture is a feed-forward network trained using the back propagation algorithm. The proposed neural architecture is capable of performing operations such as sine wave learning, amplification, and frequency multiplication, and can be used for analog signal processing tasks. 1.2 Multiple Layers of Neurons: when a set of single-layer neurons are connected with each other, they form a multiple-layer network, as shown in Figure 2.
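
To make the training step concrete, the following is a minimal numerical sketch of back propagation on the sine-wave learning task mentioned above. It is a software stand-in for the analog training loop only; the layer sizes, learning rate, and initialization are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy sine-wave learning task: one hidden layer of tan-sigmoid
# neurons and a linear output, trained with plain back propagation.
x = np.linspace(0, 2 * np.pi, 64).reshape(-1, 1)
t = np.sin(x)

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.05

for epoch in range(5000):
    a = np.tanh(x @ W1 + b1)          # forward: hidden activations
    y = a @ W2 + b2                   # forward: network output
    d2 = (y - t) / len(x)             # backward: mean-squared-error gradient
    d1 = (d2 @ W2.T) * (1 - a ** 2)   # backward: tanh'(s) = 1 - tanh(s)^2
    W2 -= lr * (a.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (x.T @ d1); b1 -= lr * d1.sum(axis=0)

After training, y should approximate sin(x) over one period; in the analog implementation the corresponding weight updates are realized by the weight-storage circuitry rather than in software.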

Architecture
Back Propagation Algorithm
Results and Discussions
Simulation Result for Neuron Activation Function
Conclusion