Abstract
In the current field of animation and gaming, the cost of collecting action data for 3D animated character generation is high, and the accuracy of action recognition remains poor. To reduce the cost of generating 3D animated characters and improve the similarity between animated characters and real humans, a 3D action recognition and animated character generation model based on ResNet and a phase function neural network is proposed. Experimental results show that the proposed model begins to converge at 50 iterations with a minimum loss of 0.13, outperforming the comparison models in both convergence speed and loss value. In human pose classification, the proposed algorithm achieves the highest accuracy of 99.46% and an average accuracy of 99.13%; its highest and average classification precision are 97.79% and 97.33%, respectively. In human pose orientation classification, its average accuracy and precision are 98.09% and 97.41%, respectively, again higher than those of the other models. The mean per joint position error of the proposed algorithm ranges from 79.3 mm at the lowest to 80.1 mm at the highest, and the average recognition time per image is only 46.8 ms, lower than that of the other algorithms. In addition, the average update times of the algorithm and of the Unreal Engine are 39.28 ms and 27.52 ms, respectively, with the two running at different frame rates. These results indicate that the proposed 3D human pose recognition and animated character generation model based on ResNet and a phase function neural network improves both the accuracy and the speed of pose recognition, effectively reducing the cost of 3D animated character generation. The generation method covers both data collection and its subsequent application, demonstrating the varied roles deep learning can play in computer graphics animation and offering effective solutions for other computer graphics problems.
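The abstract gives no implementation details, so the following is a rough illustrative sketch only. It shows (a) a phase-blended fully connected layer in the spirit of a phase function neural network, where four control weight sets are mixed by a cyclic Catmull-Rom spline indexed by the motion phase, and (b) the mean per joint position error (MPJPE) metric quoted above. The names `PhaseBlendedLayer` and `mpjpe`, the layer sizes, and the use of a ReLU activation are assumptions for illustration, not the paper's code.

```python
import numpy as np


def cubic(y0, y1, y2, y3, mu):
    """Catmull-Rom cubic interpolation between y1 and y2, with mu in [0, 1]."""
    return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu ** 3
            + (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu ** 2
            + (-0.5 * y0 + 0.5 * y2) * mu
            + y1)


class PhaseBlendedLayer:
    """One fully connected layer whose weights depend on the motion phase.

    Four control weight sets are stored; at run time they are blended with a
    cyclic Catmull-Rom spline indexed by the phase p in [0, 2*pi).
    """

    def __init__(self, in_dim, out_dim, rng):
        self.W = rng.standard_normal((4, out_dim, in_dim)) * 0.1  # control weights
        self.b = np.zeros((4, out_dim))                           # control biases

    def __call__(self, x, phase):
        t = 4.0 * phase / (2.0 * np.pi)   # map phase onto the 4 control points
        mu = t % 1.0                      # interpolation amount within the segment
        k = int(t) % 4                    # index of the segment start
        idx = [(k - 1) % 4, k, (k + 1) % 4, (k + 2) % 4]
        W = cubic(self.W[idx[0]], self.W[idx[1]], self.W[idx[2]], self.W[idx[3]], mu)
        b = cubic(self.b[idx[0]], self.b[idx[1]], self.b[idx[2]], self.b[idx[3]], mu)
        return np.maximum(W @ x + b, 0.0)  # ReLU here for brevity


def mpjpe(pred, gt):
    """Mean per joint position error: mean Euclidean distance over joints,
    in the same units as the inputs (e.g. mm). pred, gt: (num_joints, 3)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))


# Usage sketch with made-up dimensions.
rng = np.random.default_rng(0)
layer = PhaseBlendedLayer(in_dim=32, out_dim=16, rng=rng)
x = rng.standard_normal(32)
y = layer(x, phase=1.2)  # phase in [0, 2*pi)

pred = rng.standard_normal((17, 3))
gt = pred + 0.01 * rng.standard_normal((17, 3))
print(mpjpe(pred, gt))
```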