Abstract

The Convolutional Neural Network (CNN) is a popular tool in pattern recognition and machine learning. Kernel-based convolutional neural networks (KCNNs) outperform regular CNNs, and although a KCNN can solve challenging nonlinear problems, training it with a large kernel matrix is time-consuming and memory-intensive. Adopting a reduced kernel strategy can drastically decrease the computational load and memory usage. However, as the amount of training data grows exponentially, it becomes infeasible for a single worker to store the kernel matrix, so effective centralised data mining is no longer possible. In this research, we propose a distributed reduced kernel CNN (DRCNN) for training a CNN on data stored at several locations. In the DRCNN, the data are distributed among the nodes at random, and the communication between nodes is static: it is determined by the network's architecture rather than by the quantity of training data kept on each node. Unlike the standard reduced kernel CNN, the DRCNN uses a distributed training technique based on the alternating direction method of multipliers (ADMM). Experiments on a large data set show that the distributed method yields nearly the same results as the centralised algorithm while requiring significantly less computation time.
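To make the two ingredients mentioned above concrete, the sketch below illustrates, in a deliberately simplified and hypothetical form, (a) a reduced kernel matrix built against a small random subset of the training points instead of the full set, and (b) a consensus-ADMM loop in which each node fits a local kernel model on its own data and the nodes exchange only fixed-size weight vectors, independent of how much data each node stores. This is not the paper's DRCNN implementation: the local problem is a plain kernel ridge regression rather than a CNN layer, and all names and parameters (rbf_reduced_kernel, admm_consensus_fit, rho, lam, gamma) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' DRCNN): reduced-kernel features
# plus consensus ADMM, where each node solves a local ridge subproblem and
# only the fixed-size weight vector is exchanged between nodes.
import numpy as np

def rbf_reduced_kernel(X, X_tilde, gamma=0.5):
    """Reduced kernel matrix K(X, X_tilde) of shape n x n_tilde (RBF kernel).

    Using a small random subset X_tilde instead of the full training set
    shrinks the kernel matrix from n x n to n x n_tilde.
    """
    d2 = ((X[:, None, :] - X_tilde[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def admm_consensus_fit(node_data, X_tilde, rho=1.0, lam=0.1, iters=50):
    """Consensus ADMM: each node fits a local ridge model on its own
    reduced-kernel features; nodes exchange only weight vectors of size
    n_tilde, regardless of how much data each node holds."""
    m = X_tilde.shape[0]
    z = np.zeros(m)                          # global consensus weights
    w = [np.zeros(m) for _ in node_data]     # local weights
    u = [np.zeros(m) for _ in node_data]     # scaled dual variables
    blocks = [(rbf_reduced_kernel(X, X_tilde), y) for X, y in node_data]
    for _ in range(iters):
        for i, (K, y) in enumerate(blocks):
            A = K.T @ K + rho * np.eye(m)
            b = K.T @ y + rho * (z - u[i])
            w[i] = np.linalg.solve(A, b)     # local w-update
        w_bar = np.mean(w, axis=0)
        u_bar = np.mean(u, axis=0)
        n_nodes = len(w)
        # z-update: proximal step for the ridge penalty (lam/2)*||z||^2
        z = rho * n_nodes * (w_bar + u_bar) / (lam + rho * n_nodes)
        for i in range(n_nodes):
            u[i] += w[i] - z                 # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=600)
    X_tilde = X[rng.choice(600, size=40, replace=False)]   # reduced kernel points
    parts = np.array_split(rng.permutation(600), 3)         # scatter data across 3 nodes at random
    nodes = [(X[p], y[p]) for p in parts]
    z = admm_consensus_fit(nodes, X_tilde)
    print("consensus weight norm:", np.linalg.norm(z))
```

The per-node communication cost here is determined by the size of the reduced kernel subset (the length of the exchanged weight vector), not by the amount of local training data, which is the property the abstract attributes to the DRCNN's static communication pattern.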

