Abstract

Serial multi-class contour preserving classification can improve the representation of the contour of the data and thereby raise the classification accuracy of a feed-forward neural network (FFNN). The algorithm synthesizes fundamental multi-class outpost vectors (FMCOVs) and additional multi-class outpost vectors (AMCOVs) at the decision boundary between consecutive classes of data to narrow the space between classes. Both FMCOVs and AMCOVs help the FFNN place its hyperplanes so that the data are classified more accurately. However, the technique was designed to utilize only one processor, so its execution time is significantly long. This article presents an improved version of serial multi-class contour preserving classification that overcomes this time deficiency by using thread-level parallelism to support parallel computing on multi-processor or multi-core systems. The parallel algorithm distributes the data set and the processing of the FMCOV and AMCOV generators across the available threads to increase CPU utilization and the speedup factors of the FMCOV and AMCOV generators. The technique has been carefully designed to avoid data dependency issues. Experiments were conducted on both synthetic and real-world data sets, and the results confirm that the parallel multi-class contour preserving classification clearly outperforms the serial multi-class contour preserving classification in terms of CPU utilization and speedup factor.
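The sketch below is not the authors' implementation; it only illustrates the kind of thread-level decomposition the abstract describes, in which the samples of one class are split into independent chunks and each worker synthesizes boundary vectors for its own slice, so there are no data dependencies between threads. The midpoint rule stands in for the actual FMCOV/AMCOV construction, and names such as generate_outposts_for_chunk are hypothetical.

```python
# Minimal sketch, assuming a simplified outpost rule (midpoint to the
# nearest opposing-class sample); not the paper's FMCOV/AMCOV generator.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def generate_outposts_for_chunk(chunk_a, class_b):
    """For each vector in chunk_a, synthesize one boundary vector
    halfway to its nearest neighbour in the opposing class."""
    outposts = []
    for x in chunk_a:
        dists = np.linalg.norm(class_b - x, axis=1)
        nearest = class_b[np.argmin(dists)]
        outposts.append((x + nearest) / 2.0)  # simplified boundary point
    return np.array(outposts)

def parallel_outposts(class_a, class_b, n_threads=4):
    """Split class_a into disjoint chunks so each thread processes its
    own slice independently (no shared mutable state between workers)."""
    chunks = np.array_split(class_a, n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(generate_outposts_for_chunk, chunks,
                           [class_b] * n_threads)
    return np.vstack(list(results))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(1000, 2))  # class 1 samples
    b = rng.normal(3.0, 1.0, size=(1000, 2))  # class 2 samples
    print(parallel_outposts(a, b).shape)       # (1000, 2)
```

Because each chunk writes only to its own output array, the per-pair generation scales with the number of workers in roughly the way the abstract's speedup-factor claim suggests; the same decomposition would be applied to every pair of consecutive classes.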
