Abstract

In deep neural networks, the architecture is specified by the number of layers and the number of neurons in each layer. These hyperparameters are typically set through trial and error. Over the past few years, deep networks have produced successful results for various categories of constrained optimization problems, but at the cost of high memory and computation. Motivated by this, we present insights and observations on the depth aspects of deep learning networks. Because many of the parameters in these networks are redundant, they can often be replaced by more compact architectures. The number of neurons in each layer of a deep network is determined automatically, and the network parameters are obtained by applying regularizers that operate on the network's neurons. This provides a single coherent framework that optimizes memory and computation time, thereby generalizing network architectures. The process reduces the number of parameters by an appreciable amount while improving network accuracy, and it has yielded superior results in several regression and optimization scenarios.
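The abstract does not give implementation details, but one common way to realize a regularizer that acts on neurons and prunes redundant parameters is a group-sparsity (group lasso) penalty on each neuron's incoming weights. The sketch below is a hypothetical illustration in PyTorch, not the authors' method; the function name `group_sparsity_penalty`, the layer sizes, and the coefficient `lam` are assumptions chosen purely for demonstration.

```python
import torch
import torch.nn as nn

def group_sparsity_penalty(model, lam=1e-3):
    """Sum of L2 norms of each neuron's incoming weight vector (group lasso).

    Neurons whose weight groups are driven to (near) zero during training can
    be pruned afterwards, so the effective width of each layer is learned.
    """
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # Each row of weight holds one output neuron's incoming weights.
            penalty = penalty + module.weight.norm(p=2, dim=1).sum()
    return lam * penalty

# Usage sketch: add the penalty to the task loss during training.
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x, y = torch.randn(32, 64), torch.randn(32, 1)
optimizer.zero_grad()
loss = criterion(model(x), y) + group_sparsity_penalty(model, lam=1e-3)
loss.backward()
optimizer.step()
```

Because the penalty groups weights per neuron rather than per weight, entire units are encouraged to become inactive, which is what allows the layer widths to be discovered rather than fixed in advance.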
