Abstract
The present study advances object detection and tracking techniques by proposing a novel model combining Automated Image Annotation with an Inception v2-based Faster R-CNN (AIA-IFRCNN). The methodology uses the DCF-CSRT model for image annotation, Faster R-CNN for object detection, and the Inception v2 model for feature extraction, followed by a softmax layer for image classification. The proposed AIA-IFRCNN model is evaluated on three benchmark datasets: Bird (Dataset 1), UCSDped2 (Dataset 2), and Under Water (Dataset 3), measuring prediction accuracy, annotation time, Center Location Error (CLE), and Overlap Rate (OR). The experimental results indicate that the AIA-IFRCNN model outperformed existing models in both detection accuracy and tracking performance. Notably, it achieved a maximum detection accuracy of 95.62% on Dataset 1. It also achieved minimum average CLE values of 4.16, 5.78, and 3.54, and higher overlap rates of 0.92, 0.90, and 0.94 on Datasets 1, 2, and 3, respectively. Hence, this work on object detection and tracking with the AIA-IFRCNN model contributes to improving system efficiency and fostering innovation in computer vision and object tracking.
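The two tracking metrics reported above have standard definitions: Center Location Error is the Euclidean distance between the centers of the predicted and ground-truth bounding boxes, and Overlap Rate is their intersection-over-union. A minimal sketch of these metrics (assuming boxes in `(x, y, w, h)` format; the function names are illustrative, not from the paper):

```python
import math

def center_location_error(pred, gt):
    """Euclidean distance between the centers of two (x, y, w, h) boxes."""
    px, py = pred[0] + pred[2] / 2, pred[1] + pred[3] / 2
    gx, gy = gt[0] + gt[2] / 2, gt[1] + gt[3] / 2
    return math.hypot(px - gx, py - gy)

def overlap_rate(pred, gt):
    """Intersection-over-union (overlap rate) of two (x, y, w, h) boxes."""
    x1 = max(pred[0], gt[0])
    y1 = max(pred[1], gt[1])
    x2 = min(pred[0] + pred[2], gt[0] + gt[2])
    y2 = min(pred[1] + pred[3], gt[1] + gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    return inter / union if union > 0 else 0.0
```

Lower average CLE and higher OR over a sequence indicate tighter tracking, which is how the per-dataset figures above are compared.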
International Journal of Cognitive Computing in Engineering