Abstract

The MRBP algorithm is a training algorithm for Back Propagation Neural Networks (BPNNs) based on the MapReduce model. It exploits the data-parallel capability of MapReduce to improve training efficiency and has shown good performance when training BPNNs with massive numbers of training patterns. However, it is a coarse-grained pattern-parallel algorithm and lacks the capability of fine-grained structure parallelism; as a result, its training efficiency remains insufficient when training a large-scale BPNN. To address this issue, this paper proposes a novel MRBP algorithm, the Fine-grained Parallel MRBP (FP-MRBP) algorithm, which provides fine-grained structure parallelism. To the best of the authors' knowledge, this is the first time fine-grained parallelism has been introduced into the classic MRBP algorithm. The experimental results show that our algorithm achieves better training efficiency when training a large-scale BPNN.
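To make the distinction concrete, the coarse-grained pattern parallelism of the classic MRBP scheme can be sketched as a map step that computes partial gradients over disjoint shards of the training patterns and a reduce step that sums them into one global weight update. The following is a minimal illustrative sketch only, assuming a toy one-dimensional linear "network" and squared-error loss; the function names and model are our own assumptions, not the paper's actual implementation (FP-MRBP would additionally partition the network structure itself).

```python
# Hedged sketch of coarse-grained (pattern-parallel) MapReduce training:
# each mapper accumulates gradients over its shard of training patterns,
# and the reducer sums the partial gradients into one global update.
# The tiny linear model and all names here are illustrative assumptions.

from functools import reduce

def map_gradients(shard, w, b):
    """Mapper: accumulate squared-error gradients for one data shard."""
    gw, gb = 0.0, 0.0
    for x, y in shard:
        err = (w * x + b) - y  # forward pass of the toy 1-D "network"
        gw += err * x          # dL/dw contribution of this pattern
        gb += err              # dL/db contribution of this pattern
    return gw, gb

def reduce_gradients(g1, g2):
    """Reducer: element-wise sum of two partial gradients."""
    return g1[0] + g2[0], g1[1] + g2[1]

def mrbp_step(shards, w, b, lr=0.02):
    """One synchronous step: map over shards, reduce, apply the update."""
    partials = [map_gradients(s, w, b) for s in shards]  # parallel in real MapReduce
    gw, gb = reduce(reduce_gradients, partials)
    n = sum(len(s) for s in shards)
    return w - lr * gw / n, b - lr * gb / n

# Toy usage: learn y = 2x from patterns split across two shards.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w, b = 0.0, 0.0
for _ in range(2000):
    w, b = mrbp_step(shards, w, b)
```

Note that the reduce step is a plain element-wise sum, which is why this scheme parallelizes over patterns but not over the network's internal structure; that limitation is what FP-MRBP targets.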

