Abstract

Interpreting human posture in videos and pictures is the most basic component of human posture prediction. A system was created that determines what a movement is, and for what purpose it is made, by evaluating pictures and videos; in this way, an automatic structure was built that detects and classifies human movements. The motion semantics contained in the system are recognized in such a way that the underlying pattern can be expressed, and the system is intended to exploit these components by capturing instantaneous information. Results were obtained by first extracting instantaneous still images and eliminating time intervals that carry no information, and the images were classified according to their accuracy. Based on the location coordinates in the images and videos, the system attempts to determine how people might react at the next stage. Through an analysis performed over the joints with optical flow computation, motion information was obtained, and classifications and analyses expressing the strength of the motion were created: motion information in the region detected in the image is determined from the detected joints, revealing the power generated by the movement. The resulting histograms make the classification of motion easier. Building on the reliability of descriptors that incorporate time sequentially through joint detection, a sliding classification mechanism was constructed within the framework of these joints. The aim of this study is to obtain a functional structure that can autonomously recognize and understand the movement of stationary or moving subjects. An efficient structure was created that provides a useful and facilitating mechanism by solving the problems in estimation.
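
A rough illustration of the motion analysis described above (optical flow evaluated at detected joint coordinates and summarized as a histogram) is given below. This is a minimal sketch, not the paper's published implementation: the Farnebäck flow method, the layout of the `joints` array, and the histogram settings are assumptions.

```python
import cv2
import numpy as np

def joint_motion_histogram(prev_frame, next_frame, joints, bins=16, max_mag=20.0):
    """Sample optical-flow magnitude at joint coordinates and histogram it.

    prev_frame, next_frame: consecutive BGR video frames.
    joints: (N, 2) array of (x, y) joint pixel coordinates, assumed to
            come from a pose detector such as OpenPose.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow (Farneback): flow[y, x] = (dx, dy) per pixel.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Flow magnitude at each joint expresses the "power" of its motion.
    xs = np.clip(joints[:, 0].astype(int), 0, flow.shape[1] - 1)
    ys = np.clip(joints[:, 1].astype(int), 0, flow.shape[0] - 1)
    mags = np.linalg.norm(flow[ys, xs], axis=1)

    # Histogram of per-joint motion magnitudes, usable as a classification feature.
    hist, _ = np.histogram(mags, bins=bins, range=(0.0, max_mag))
    return hist / max(hist.sum(), 1)  # normalized feature vector
```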

Highlights

  • Human posture prediction is a very important step toward understanding the actions of people in the videos or pictures under discussion

  • The OpenPose system aims to realize a real-time estimator for 17 body key points in images, with a total of 135 key points across the whole body

  • OpenPose has been open-sourced as a library repository, enabling joint-extraction pipelines through its command-line interface, Python API, and Unity plugin (see the sketch below)
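
For reference, key points such as those described in the highlights can be obtained through the OpenPose Python API along the following lines. This is a minimal sketch based on the public openpose examples; the model folder path and the import setup are assumptions that depend on the local build.

```python
import cv2
import pyopenpose as op  # built from the CMU openpose repository; depending
                         # on the build, the module may need sys.path setup

# Model folder path is an assumption; point it at your local installation.
params = {"model_folder": "openpose/models/"}

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("person.jpg")
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints has shape (num_people, num_keypoints, 3): x, y, confidence.
print(datum.poseKeypoints)
```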

Introduction

Human posture prediction is a very important step toward understanding the actions of people in the videos or pictures under discussion. In human motion detection studies, the fundamental approach is to extract images from videos and classify the corresponding movements: the system as a whole works by extracting features from the images and building a classification on top of those features. The disappearance of some or all parts of the human body from the image, changes of camera angle, and shifts of the focus area are the most basic factors that can make estimation difficult. Conditions such as timing and the start and end of a motion vary from situation to situation; they adversely affect generalization in motion recognition and make classification harder. Human posture estimation was strengthened when a high-resolution mechanism was created by detecting the interactions between joints through graphical modeling; these approaches have been found to perform well both in appearance and in estimating moments of difference. The data on the accuracy of the study are shared, and the efficiency of the study is shown.
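
To make the pipeline described above concrete (extract frames from videos, compute per-frame features, classify), here is a minimal sketch. The frame-sampling step, the downscaled-grayscale feature, and the SVM classifier are illustrative assumptions, not the authors' method.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def extract_frames(video_path, step=5):
    """Sample every `step`-th frame from a video file."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def frame_features(frame, size=(32, 32)):
    """Illustrative per-frame feature: downscaled grayscale intensities."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).flatten() / 255.0

def train_action_classifier(videos, labels):
    """Fit a classifier on per-frame features; each frame inherits its video's label."""
    X, y = [], []
    for path, label in zip(videos, labels):
        for frame in extract_frames(path):
            X.append(frame_features(frame))
            y.append(label)
    return SVC(kernel="rbf").fit(np.array(X), np.array(y))
```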
