Abstract

Inverse kinematics is a significant challenge for robotic manipulators, and finding practical solutions is crucial for achieving precise control. This paper presents a study on solving the inverse kinematics problem using the Feed-Forward Back-Propagation Neural Network (FFBP-NN) and examines its performance under different hyperparameters. Our primary objective is to use the FFBP-NN to determine the joint angles required to reach specified Cartesian coordinates of the manipulator's end-effector. To this end, we first generate three input-output datasets (a fixed-step-size dataset, a random-step-size dataset, and a sinusoidal-signal-based dataset) of joint positions and their corresponding Cartesian coordinates using the direct geometric formulation of a two-degree-of-freedom (2-DoF) manipulator. We then train the FFBP-NN on the generated datasets using the MATLAB Neural Network Toolbox and investigate its potential by varying the hyperparameters (e.g., the number of hidden neurons, the number of hidden layers, and the training optimizer). Three training optimizers are considered: the Levenberg-Marquardt (LM) algorithm, the Bayesian Regularization (BR) algorithm, and the Scaled Conjugate Gradient (SCG) algorithm. The mean squared error (MSE) is used as the main performance metric to evaluate the training accuracy of the FFBP-NN. The comparative outcomes offer valuable insight into the capabilities of the various network architectures for addressing inverse kinematics challenges. Overall, this study explores the application of FFBP-NNs to inverse kinematics and, through a portfolio of experimental results obtained by varying the hyperparameters, facilitates the choice of the most appropriate network design.
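
As a concrete illustration of the pipeline summarized above, the following MATLAB sketch pairs the direct (forward) kinematics of a planar 2-DoF arm with the Neural Network Toolbox calls corresponding to the training setup named in the abstract (feedforwardnet, train, perform; 'trainlm', 'trainbr', and 'trainscg' implement LM, BR, and SCG, respectively). The link lengths l1 and l2, the joint ranges, the step size, and the hidden-layer size are assumed values for illustration, not the paper's actual settings.

% Minimal sketch (assumed parameters) of the fixed-step-size dataset
% generation and FFBP-NN training described in the abstract.
l1 = 1.0; l2 = 0.8;                        % assumed link lengths
theta1 = 0:0.05:pi;                        % fixed-step joint sweeps (assumed range/step)
theta2 = 0:0.05:pi;
[T1, T2] = meshgrid(theta1, theta2);
q = [T1(:)'; T2(:)'];                      % joint angles (network targets), 2 x N

% Direct (forward) kinematics of a planar 2-DoF manipulator:
x = l1*cos(q(1,:)) + l2*cos(q(1,:) + q(2,:));
y = l1*sin(q(1,:)) + l2*sin(q(1,:) + q(2,:));
p = [x; y];                                % Cartesian coordinates (network inputs), 2 x N

% FFBP-NN: map end-effector coordinates back to joint angles.
net = feedforwardnet(10, 'trainlm');       % 10 hidden neurons (assumed), LM optimizer
% Alternatives studied in the paper: 'trainbr' (BR), 'trainscg' (SCG).
net = train(net, p, q);
mseVal = perform(net, q, net(p));          % mean squared error on the training data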
