Abstract

This research proposes to develop a device that fuses deep learning-based object detection technology and measurement algorithms to collect real-time position data of objects moving in an area. Commonly used motion analysis devices are limited to measuring two-dimensional plane motion based on images, so physical quantities such as moments of inertia and rotation are difficult to obtain. Although systems that track, measure, and analyze the motion of objects are used in sports, they are expensive and field-specific, making them difficult to apply widely across fields of study. This research aims to overcome these limitations by developing a device that analyzes the movement of objects in three-dimensional space using two vertically positioned cameras, and that can be used in a variety of environments at dramatically reduced cost. The proposed device operates in two main steps: the first step measures the three-dimensional position of an object with cameras connected to a single-shot multibox detector (SSD) trained on 1,465 images, and the second step calculates the physical quantities and analyzes the motion after visualizing the measured position and time data in three dimensions. To measure the actual coordinates of the object, the diameter of the object is measured in pixels, and the distance in centimeters per pixel is obtained from the similarity ratio with the moving object captured by the camera. With this cm/px scale, the pixel values obtained from the camera are converted to centimeters, the actual coordinates of the object. To verify the accuracy of the measured data, the acceleration of gravity was estimated from the trend line of each parabolic trajectory. The calculated acceleration of gravity had an average error of ±0.20 m/s², demonstrating good accuracy.
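As a rough illustration of the conversion and verification steps described above, the following minimal Python sketch shows how pixel coordinates might be scaled to centimeters via the similarity ratio with the object's known diameter, and how the acceleration of gravity can be estimated from a quadratic (parabolic) fit to the vertical position over time. The function names, the example diameter, and the sample data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cm_per_pixel(known_diameter_cm, diameter_px):
    """Scale factor from the similarity ratio: real diameter / apparent diameter in pixels."""
    return known_diameter_cm / diameter_px

def pixels_to_cm(coords_px, scale_cm_per_px):
    """Convert detected pixel coordinates to real-world centimeters."""
    return np.asarray(coords_px, dtype=float) * scale_cm_per_px

def estimate_gravity(t, y_cm):
    """Fit y(t) = a*t^2 + b*t + c (y upward, in meters) and return g = -2a in m/s^2."""
    a, b, c = np.polyfit(t, np.asarray(y_cm) / 100.0, 2)  # convert cm -> m before fitting
    return -2.0 * a

# Illustrative usage with synthetic data (not the paper's measurements)
scale = cm_per_pixel(known_diameter_cm=6.6, diameter_px=44)   # e.g. a ball 6.6 cm wide spanning 44 px
t = np.linspace(0.0, 0.6, 20)                                  # timestamps in seconds
y_true_cm = 100.0 * (2.0 * t - 0.5 * 9.81 * t**2)              # ideal projectile height in cm
y_px = y_true_cm / scale + np.random.normal(0, 0.5, t.size)    # simulated pixel measurements with noise
y_cm = pixels_to_cm(y_px, scale)

print(f"Estimated g: {estimate_gravity(t, y_cm):.2f} m/s^2")
```

In the actual device, the pixel coordinates would come from the SSD detector's bounding boxes on each camera view, and a fit of this kind would be applied to each recorded parabolic trajectory to check the measured acceleration against the known value of g.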
