Abstract

With the development of automated driving, driving safety has become a central concern in the field. However, existing methods mainly treat living objects as generic obstacles and focus on pedestrian safety, ignoring the safety of animals in automated driving scenarios. In this paper, we propose an animal movement prediction model that combines object detection and pose estimation, aiming to help the perception system of an autonomous vehicle judge animal poses more accurately. First, we preprocess the datasets: we use a Convolutional Neural Network based object detection method to detect animals in images, apply a mainstream 2D animal pose estimation method to extract the keypoints of each animal, and divide the resulting dataset into three categories: running, walking, and waiting. After preprocessing, we use ResNet and MLP models to classify the poses. For the ResNet model, we evaluate accuracy on the test set during training and keep the checkpoint with the highest accuracy; the final model achieves up to 75% accuracy on the running class. For the MLP model, we use the ranking of the angles of 13 specific bone points as the criterion for pose classification; the three categories of test sets are fed into three trained MLPRegressor models to obtain the desired classification, and the accuracy for all three poses exceeds 80%. Finally, we choose the model with the highest accuracy as our classification method. This work is expected to bring animal pose estimation and pose classification into automated driving to improve driving safety.
