Abstract

Predicting the trajectory of a rapidly spinning ping pong ball after it is hit plays an important role in athlete training and even in competition, as it can improve the level and efficiency of training. It is therefore necessary to collect data on parameters of the ball after hitting, such as position, angle, and speed. This study established a model for predicting the trajectory of a rapidly spinning ping pong ball after hitting, predicted the trajectory using the extreme learning machine (ELM) algorithm and a back propagation (BP) neural network, and compared the results to determine the better algorithm. Moreover, both algorithms were improved. The results demonstrated that the improved ELM algorithm achieved the smallest error.

Highlights

  • With the development of society, the technical level of modern table tennis is improving, and the requirements placed on table tennis players are becoming higher

  • The improved extreme learning machine (ELM) algorithm could fully satisfy the prediction requirements of the table tennis robot's tactics

  • The time consumed by the back propagation (BP) neural network and the ELM algorithm on the x, y, and z axes could be obtained through the aforementioned algorithms

Summary

INTRODUCTION

FOR THE FORMULA OF MODEL ALGORITHM

ELM Algorithm

The ELM algorithm is a fast training algorithm for single-hidden-layer feedforward neural networks proposed by Huang et al. [11,12,13,14,15]; its network structure and working principle are as follows. The output weight β_i connects the i-th hidden node to the network output, and each hidden node is parameterized by (p_i, t_i). For an additive hidden node, the output of the i-th hidden node for a sample x is G(p_i, t_i, x), with the expression:

G(p_i, t_i, x) = g(p_i · x + t_i),

where g: R → R is the activation function and p_i · x denotes the inner product of the sample x ∈ R^n and the input weight vector p_i. For a radial basis function (RBF) hidden node, the expression of G(p_i, t_i, x) is:

G(p_i, t_i, x) = g(t_i ‖x − p_i‖).

Given N distinct data samples {(x_i, e_i)}, i = 1, …, N, with x_i ∈ R^n and e_i ∈ R^m, a single-hidden-layer neural network containing M hidden neurons can approximate them with zero error; that is, there exist β_i, p_i, and t_i, i = 1, …, M, such that

Σ_{i=1}^{M} β_i G(p_i, t_i, x_j) = e_j,  j = 1, …, N.
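The training scheme described above can be sketched in a few lines: the input weights p_i and biases t_i are drawn at random and left fixed, and only the output weights β are solved for in closed form. This is a minimal illustration, not the paper's implementation; the function names, the sigmoid activation, and the sampling range of the random weights are assumptions made here.

```python
import numpy as np

def elm_train(X, T, M, seed=0):
    """Fit a single-hidden-layer ELM with additive sigmoid hidden nodes.

    X: (N, n) training inputs; T: (N, m) training targets; M: hidden neurons.
    Input weights p_i and biases t_i are random and never trained; only the
    output weights beta are computed, in closed form.
    """
    rng = np.random.default_rng(seed)
    N, n = X.shape
    P = rng.uniform(-8.0, 8.0, size=(n, M))   # input weight vectors p_i (scale is a free choice)
    t = rng.uniform(-8.0, 8.0, size=M)        # hidden biases t_i
    H = 1.0 / (1.0 + np.exp(-(X @ P + t)))    # hidden outputs G(p_i, t_i, x) = g(p_i . x + t_i)
    beta = np.linalg.pinv(H) @ T              # output weights via Moore-Penrose pseudoinverse
    return P, t, beta

def elm_predict(X, P, t, beta):
    """Network output: sum_i beta_i * G(p_i, t_i, x)."""
    H = 1.0 / (1.0 + np.exp(-(X @ P + t)))
    return H @ beta
```

With M ≥ N random hidden nodes, the hidden-layer matrix H is almost surely of full row rank, so the training samples are fitted with numerically near-zero error, consistent with the zero-error property stated above.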

The improved ELM algorithm
BP Neural Network
The Experimental Process of the Model Algorithm
Comparison Results of Algorithmic Models
Comparison Results of the Improved Classifier
CONCLUSION