Abstract

Object detection and tracking is one of the fastest-growing areas of computer vision, with applications in robotics, healthcare, security, autonomous vehicle systems, machine inspection, surveillance, and logistics. Object detection must account for many factors, including intrinsic and extrinsic factors, camera motion, deformation, occlusion, and motion blur. Machine learning (ML) and deep learning (DL) approaches are widely adopted for object detection and tracking, and training these models is the key challenge in achieving robust accuracy for automated detection and tracking. Data annotation paves the way for training ML and DL models; however, training on inaccurate data jeopardizes the robustness of the resulting detection and tracking. To generate 100% accurate datasets, human intervention is crucial for assigning identities to corresponding objects across frames. In this paper, we utilize an OpenCV-based deep learning technique and introduce a framework that allows users to assign identities to detected objects, producing flawless human-annotated ground-truth data. The proposed framework lets users assign correspondence IDs to bounding boxes through a Tkinter GUI, helping organizations prepare robust annotated datasets for training large-scale object-tracking models. As an extension of our study, we introduce a novel tool that learns from human-annotated datasets and accurately generates identities for detected objects. We evaluate our models on human-annotated ground-truth datasets of roughly 100 and 1,000 samples, and later on a machine-generated ground-truth dataset of 5,000 samples. In our experiments, we achieved accuracies of 97.55% and 96.68%, respectively, on the human-annotated ground-truth datasets, and 96.33% on the machine-generated ground-truth dataset, which indicates the robustness of our model. In future work, we will extend this research to optimize the proposed models toward an ultimate accuracy of 100%.
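
To make the described workflow concrete, below is a minimal sketch of how such an annotation loop could be wired together with OpenCV's DNN module and a Tkinter canvas. This is an illustration under stated assumptions, not the paper's actual tool: the model files (a Caffe MobileNet-SSD), the input frame name, the confidence threshold, and the click-to-assign interaction are all hypothetical choices.

```python
# Minimal sketch: OpenCV DNN detection + Tkinter ID assignment.
# Assumes a Caffe MobileNet-SSD model; all file names below are hypothetical.
import tkinter as tk
from tkinter import simpledialog

import cv2
from PIL import Image, ImageTk

PROTOTXT = "MobileNetSSD_deploy.prototxt"   # hypothetical model files
WEIGHTS = "MobileNetSSD_deploy.caffemodel"
CONF_THRESHOLD = 0.5                        # illustrative cutoff

def detect_boxes(frame):
    """Run the OpenCV DNN detector and return [x1, y1, x2, y2] boxes."""
    net = cv2.dnn.readNetFromCaffe(PROTOTXT, WEIGHTS)
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()              # shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        if detections[0, 0, i, 2] > CONF_THRESHOLD:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] *
                              [w, h, w, h]).astype(int)
            boxes.append((x1, y1, x2, y2))
    return boxes

def annotate(frame, boxes):
    """Show boxes on a Tkinter canvas; clicking a box prompts for its ID."""
    ids = {}
    root = tk.Tk()
    root.title("Assign correspondence IDs")
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    photo = ImageTk.PhotoImage(Image.fromarray(rgb))
    canvas = tk.Canvas(root, width=frame.shape[1], height=frame.shape[0])
    canvas.pack()
    canvas.create_image(0, 0, anchor=tk.NW, image=photo)
    for x1, y1, x2, y2 in boxes:
        canvas.create_rectangle(x1, y1, x2, y2, outline="red", width=2)

    def on_click(event):
        # Find the first box containing the click and ask the user for an ID.
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= event.x <= x2 and y1 <= event.y <= y2:
                obj_id = simpledialog.askstring("Object ID",
                                                f"ID for box {i}:")
                if obj_id:
                    ids[i] = obj_id
                    canvas.create_text(x1 + 4, y1 + 10, anchor=tk.W,
                                       text=obj_id, fill="yellow")
                break

    canvas.bind("<Button-1>", on_click)
    root.mainloop()
    return ids          # {box_index: user-assigned identity}

if __name__ == "__main__":
    img = cv2.imread("frame_0001.jpg")      # hypothetical input frame
    print(annotate(img, detect_boxes(img)))
```

In a full tool, the returned `{box_index: identity}` mapping would be written out per frame so that the same identity can be carried across the sequence, which is the correspondence information the ground-truth datasets require.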
