Abstract

In human behavior recognition, the traditional dense optical flow method computes a flow vector for every pixel, and this overhead limits running speed. This paper proposes a method combining YOLOv3 (You Only Look Once v3) with a local optical flow method. Building on the dense optical flow method, the optical flow modulus is calculated only in the region where a human target is detected, which reduces the amount of computation and saves time. A threshold on the modulus is then set to complete the human behavior identification. Through algorithm design and experimental verification, the walking, running and falling states of the human body were identified in real-life indoor sports video. Experimental results show that the algorithm is particularly advantageous for jogging behavior recognition.
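The core decision step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array shapes, box coordinates, and threshold values are assumptions chosen for the toy example (the paper determines its threshold experimentally), and the flow field would in practice come from a dense optical flow method restricted to a YOLOv3 detection box.

```python
import numpy as np

def mean_flow_modulus(flow, box):
    """Mean optical-flow magnitude inside a detected person box.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements,
          e.g. from a dense optical flow method.
    box:  (x1, y1, x2, y2) person bounding box, e.g. from YOLOv3.
    """
    x1, y1, x2, y2 = box
    region = flow[y1:y2, x1:x2]               # restrict to the detected person
    modulus = np.linalg.norm(region, axis=2)  # per-pixel flow magnitude
    return float(modulus.mean())

def classify_behavior(mean_modulus, walk_thresh=2.0, run_thresh=6.0):
    """Map the mean flow modulus to a coarse behavior label.

    The two thresholds here are illustrative placeholders only.
    """
    if mean_modulus < walk_thresh:
        return "walking"
    elif mean_modulus < run_thresh:
        return "running"
    return "falling"

# Toy example: a synthetic flow field with uniform displacement (3, 4)
# inside the box, so the mean modulus inside the box is exactly 5.0.
flow = np.zeros((120, 160, 2))
flow[20:80, 30:90] = (3.0, 4.0)
m = mean_flow_modulus(flow, (30, 20, 90, 80))
print(m)                     # 5.0
print(classify_behavior(m))  # running
```

Restricting the modulus computation to the detection box is where the claimed runtime saving comes from: only the box's pixels, rather than the whole frame, enter the mean.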

Highlights

  • As sub-fields of artificial intelligence technology, computer vision and deep learning have developed rapidly in recent years

  • This paper proposes a method combining YOLOv3 (You Only Look Once v3) with a local optical flow method

  • The local optical flow method restricts the dense optical flow computation to the regions detected by the YOLOv3 algorithm, saving running time and speeding up processing


Summary

Introduction

As sub-fields of artificial intelligence technology, computer vision and deep learning have developed rapidly in recent years. Human behavior recognition based on video sequences has evolved from the earliest traditional approaches, which classify manually designed features, to current methods based on automatic feature extraction with deep learning. The former requires manual design of features followed by classification with a classifier. Among traditional optical flow methods, the sparse optical flow method computes flow only at individual points, or points of special significance, in the image, while the dense optical flow method proposed by Gunnar Farneback [5] computes a dense optical flow field over the entire image by estimating a motion translation model for each pixel. Because the dense flow field captures video features at the pixel level, human motion recognition based on it is clearly better than recognition based on sparse optical flow. Wang et al. introduced a new continuous optical flow framework to capture pixel dynamics by representing a group of continuous RGB frames sequentially through …
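To make the "displacement for every pixel" idea concrete, here is a deliberately naive dense-flow sketch using per-pixel block matching on tiny synthetic frames. This is a toy stand-in only: Farneback's actual algorithm uses polynomial expansion of pixel neighborhoods, not exhaustive SSD search, and the frame sizes and search radius here are assumptions for the demonstration.

```python
import numpy as np

def block_matching_flow(prev, curr, radius=2, patch=1):
    """Naive dense flow: for every interior pixel, search a
    (2*radius+1)^2 neighborhood in the next frame for the
    best-matching patch (minimum SSD) and record the displacement."""
    h, w = prev.shape
    flow = np.zeros((h, w, 2))
    p = patch
    for y in range(p + radius, h - p - radius):
        for x in range(p + radius, w - p - radius):
            ref = prev[y - p:y + p + 1, x - p:x + p + 1]
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    cand = curr[y + dy - p:y + dy + p + 1,
                                x + dx - p:x + dx + p + 1]
                    ssd = ((ref - cand) ** 2).sum()
                    if ssd < best:
                        best, best_dxy = ssd, (dx, dy)
            flow[y, x] = best_dxy
    return flow

# Toy frames: a random texture shifted right by 1 pixel and down by 2.
rng = np.random.default_rng(0)
prev = rng.random((20, 20))
curr = np.roll(np.roll(prev, 2, axis=0), 1, axis=1)
flow = block_matching_flow(prev, curr)
print(flow[10, 10])  # [1. 2.]  -- the (dx, dy) shift is recovered
```

The double loop over every pixel is exactly the per-pixel cost the paper's localization is designed to avoid: a real dense method still touches each pixel, so shrinking the region of interest to a detection box shrinks the work proportionally.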

Zheng et al.
Algorithm Design
Experimental Design and Result Analysis
Conclusions
