Abstract

Neural network (NN) systems on a system-on-chip (SoC) are an active research topic worldwide, and SoC designs have recently begun to incorporate AI techniques, often developed in Python. For intelligent systems on an SoC, hardware/software co-design is essential; the target platforms are small devices and embedded systems, and with the spread of IoT these techniques are expected to grow in importance. In such settings, avoiding premature convergence while maintaining performance is a challenge in training neural networks, especially for large NNs or large amounts of training data, and the limited memory and processing power of an SoC compounds the problem. For such resource-constrained systems, we propose an improved particle swarm optimization (PSO) algorithm, called PSOseed2, for training NNs. PSOseed2 mitigates the premature convergence of the standard PSO (SPSO) algorithm by slightly modifying the velocity update function, without adding significant computational cost over SPSO. We evaluated this algorithm on a field-programmable gate array (FPGA)-based NN and a software-based NN, training them with several PSO algorithms: SPSO, PSOseed, PSOseed2, and dissipative PSO. Experimental results on different datasets confirmed that NNs trained by the proposed PSOseed2 algorithm achieved better recognition rates and lower global learning errors than NNs trained by the other PSO algorithms. In this talk, I will survey related research in this area and present our results and discussion.
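For context, the velocity update that PSOseed2 modifies is the standard PSO (SPSO) update. The abstract does not specify the exact modification, so the sketch below shows only the baseline SPSO update; the parameter names (`w`, `c1`, `c2`) and the injectable `rng` are illustrative assumptions, not taken from the paper.

```python
import random

def spso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    """Standard PSO (SPSO) velocity update for one particle, per dimension.

    PSOseed2 is described as slightly modifying this update to avoid
    premature convergence; the exact modification is not given in the abstract.
    """
    return [
        w * vi                       # inertia: keep part of the old velocity
        + c1 * rng() * (pi - xi)     # cognitive term: pull toward personal best
        + c2 * rng() * (gi - xi)     # social term: pull toward global best
        for vi, xi, pi, gi in zip(v, x, pbest, gbest)
    ]

def spso_position(x, v):
    """Position update: x(t+1) = x(t) + v(t+1)."""
    return [xi + vi for xi, vi in zip(x, v)]
```

In NN training with PSO, each particle's position vector encodes one candidate set of network weights, and the fitness is the training error of the NN with those weights.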
