Abstract

Improving Learning Performance in Neural Networks

Highlights

  • A large number of optimization research studies have been conducted using efficient deep learning models, e.g., reinforcement learning [1], self-supervised learning [2], unsupervised learning [3], pruning of decision trees [4], swarm intelligence [5], etc.

  • The networks are adjusted using a back-propagation algorithm that observes the error at each node, based on the difference between the expected output and the observed output (see the sketch after this list).

  • Improving the solutions constructed by the nodes is the main challenge in neural network algorithms.
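
The following is a minimal illustrative sketch, not code from the paper, of the back-propagation step described in the second highlight: the output error is the difference between the expected and observed output, it is propagated back to each node, and the weights are adjusted accordingly. The function names (`backprop_step`, `sigmoid`) and the small 3-2-1 network are assumptions made only for illustration.

```python
# Illustrative sketch of one back-propagation step (assumed, not the paper's code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y_expected, W1, W2, lr=0.1):
    # Forward pass: observed output of the network.
    h = sigmoid(W1 @ x)
    y_observed = sigmoid(W2 @ h)

    # Output error: difference between expected and observed output.
    delta_out = (y_observed - y_expected) * y_observed * (1 - y_observed)
    # Propagate the error back to each hidden node.
    delta_hidden = (W2.T @ delta_out) * h * (1 - h)

    # Adjust the weights in proportion to each node's error.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)
    return W1, W2

# Example: a 3-2-1 network updated on a single training pair.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(1, 2))
x, y = np.array([0.5, 0.1, 0.9]), np.array([1.0])
W1, W2 = backprop_step(x, y, W1, W2)
```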


Summary

Introduction

A large number of optimization research studies have been conducted using efficient deep learning models, e.g., reinforcement learning [1], self-supervised learning [2], unsupervised learning [3], pruning of decision trees [4], swarm intelligence [5], etc. In our perspective, learning in a neural network could be optimized by increasing the pheromone level deposited for the transition from node to node in its space. This technique could influence and control parameters toward the optimal solution. Unlike previous methods proposed by recent researchers [6][7], the goal is not to alter the process of learning itself, but rather to act on the outcomes of learning in order to increase optimization performance. A technique has been developed for learning optimization algorithms for high-dimensional stochastic optimization problems, which shares the drawbacks of training shallow neural networks. Changing the attributes of a neural network, such as its weights and learning rates, to reduce the losses and provide the most accurate results requires dedicated methods or algorithms.
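
As an illustration of the pheromone-based view described above, the following sketch assumes a simple ant-colony-style rule: each node-to-node transition carries a pheromone level, the probability of choosing a transition is proportional to that level, and pheromone is deposited on the transitions that contributed to a good solution. The names (`choose_next_node`, `deposit`) and the evaporation parameter are hypothetical; the paper does not specify this implementation.

```python
# Illustrative sketch of pheromone-weighted transitions (assumed, not the paper's code).
import random

def choose_next_node(tau, current, candidates, alpha=1.0):
    # Probability of each transition is proportional to its pheromone level.
    weights = [tau[(current, j)] ** alpha for j in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(candidates, weights=probs, k=1)[0]

def deposit(tau, path, quality, evaporation=0.1):
    # Evaporate a fraction of all pheromone, then reinforce the transitions
    # on the used path in proportion to the quality of the resulting solution.
    for edge in tau:
        tau[edge] *= (1.0 - evaporation)
    for edge in path:
        tau[edge] += quality

# Example: three candidate transitions out of node 0, all starting equal.
tau = {(0, 1): 1.0, (0, 2): 1.0, (0, 3): 1.0}
nxt = choose_next_node(tau, 0, [1, 2, 3])
deposit(tau, [(0, nxt)], quality=0.5)
```

Raising the pheromone level on a transition makes it more likely to be selected again, which is how, in this view, the search can be steered toward the optimal solution.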

Related work
The metaphor of learning
Stochastic gradient descent
Algorithm and formula
Findings
Conclusions