Abstract

Artificial neural networks are a mathematical abstraction that models nerve cells and their connections in the biological brain. Using them to solve a wide range of nonlinear problems in approximation, machine learning, and artificial intelligence can require significant computing power, which motivates the search for approaches that use hardware resources more efficiently. The article describes an early stage in the development of an optimized neural network intended for hardware with constrained resources but with the ability to run parallel algorithms; field programmable gate arrays are circuits that meet these requirements. The research proposes an approach that significantly reduces the amount of logic used by contextually switching the weight matrices and the matrices of the activation functions, which in turn reduces the number of physical network layers. This suggests the possibility of creating more affordable and smarter devices without significantly compromising performance.

Keywords: Artificial neural network, Contextual switching, Programmable logic
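As a rough illustration of the contextual-switching idea only, the following Python sketch models one shared compute block whose weight matrix, bias, and activation function are swapped before each pass, so several logical layers reuse a single block of logic. The context list, matrix dimensions, and activation choices here are illustrative assumptions, not the authors' FPGA design.

```python
# Conceptual software model of contextual switching: one shared "physical"
# layer is reused for several logical layers by swapping its context
# (weights, bias, activation) between passes.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-layer contexts selected before each pass through the
# shared compute block (dimensions chosen only for the example).
contexts = [
    (np.random.randn(8, 4), np.zeros(8), relu),     # logical layer 1
    (np.random.randn(8, 8), np.zeros(8), relu),     # logical layer 2
    (np.random.randn(2, 8), np.zeros(2), sigmoid),  # output layer
]

def forward(x, contexts):
    """Run the input through the single shared layer, switching its
    weight matrix and activation function at each step."""
    for weights, bias, activation in contexts:
        x = activation(weights @ x + bias)
    return x

y = forward(np.random.randn(4), contexts)
print(y)
```

In hardware terms, the sketch corresponds to time-multiplexing one multiply-accumulate/activation block across layers instead of instantiating each layer separately, which is the source of the claimed logic savings.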
