Abstract

Background/Objectives: In today's rapidly evolving high-tech ecosystem, cutting-edge deep learning techniques for vehicle detection and classification have driven swift paradigm shifts across diverse operations through the deployment of convolutional neural models in traffic surveillance systems. The fundamental element of a traffic management system is the real-time dynamic image, which forms the base input for vehicle recognition systems. Deep models operating on these base static images are highly pragmatic, and a well-designed approach is key to their successful application. Methods: This study proposes a Faster Region-based Convolutional Neural Network (Faster R-CNN) technique for image-based vehicle detection with significant performance benefits. Essentially, a fine-tuned, pre-trained VGG-16 deep model is used as the base network of the Faster R-CNN. The framework is constructed for a customized, finite-capacity vehicle dataset, which is subsequently used to train and test the system. For further enhancement, a speed-up bottleneck and data augmentation are implemented to improve training speed and accuracy. Findings: The experiments demonstrate a sensitivity of 93.5%, with an acceptable accuracy of 87.6% and an execution time of 0.42 s for vehicle detection. Novelty: On our customized dataset, the performance-enhanced detection framework shows a 4% increase in sensitivity and an improvement of 3.23 s in execution time compared with other existing models. The proposed research introduces a novel, fine-tuned Faster R-CNN vehicle detection algorithm that integrates sophisticated approaches for dynamically transforming live traffic video streams into image inputs for the optimized detection framework, achieving a high sensitivity factor with an efficient computation stack that benefits both cost and time.

Keywords: Data Augmentation; Deep Learning; Faster Region-based Convolutional Neural Network; Traffic Surveillance System; VGG-16 pre-trained model; Vehicle Detection
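
To make the Methods description concrete, the following is a minimal sketch (not the authors' released code) of assembling a Faster R-CNN detector whose base network is a pre-trained VGG-16, assuming PyTorch and torchvision are available; the anchor sizes, class count, and input dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch (assumption: PyTorch/torchvision; not the paper's released code)
# of building a Faster R-CNN detector with a pre-trained VGG-16 base network.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# VGG-16 convolutional layers (pre-trained on ImageNet) serve as the base network;
# in practice these weights would be fine-tuned on the vehicle dataset.
vgg16 = torchvision.models.vgg16(weights="IMAGENET1K_V1")
backbone = vgg16.features
backbone.out_channels = 512  # the last VGG-16 conv block outputs 512 channels

# Region Proposal Network anchors; the sizes/ratios below are illustrative only.
anchor_generator = AnchorGenerator(
    sizes=((64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

# RoI pooling over the single VGG-16 feature map.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

# num_classes is hypothetical: a few vehicle categories plus background.
model = FasterRCNN(
    backbone,
    num_classes=4,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

# Forward pass on a dummy traffic image to confirm the detection head wiring.
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 600, 800)])
print(detections[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])
```

During training, data augmentation such as random horizontal flipping of images and their bounding boxes can be layered on top of this skeleton; that is one common way to realise the accuracy gains the abstract attributes to augmentation.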

Highlights

  • In the prevailing scenario, object detection is one of the most complex tasks in the computer-vision domain

  • This study proposes a Faster Region-based Convolutional Neural Network (R-CNN) technique for image-based vehicle detection with significant performance benefits

  • A fine-tuned, pre-trained VGG-16 deep model serves as the base network that is transformed into Faster R-CNN

Introduction

Object detection is one of the most complex tasks in the computer-vision domain. The original Region-based Convolutional Neural Network (R-CNN) has inspired improved frameworks such as Fast R-CNN, Faster R-CNN, and Mask R-CNN, which achieve better performance, provide accurate results, and make real-time object detection and vehicle detection practical. Vehicle detection remains challenging because occlusion and truncation of vehicles contribute to scale variations in traffic images. This inefficiency has its roots in convolutional neural network-based object detectors such as SSD and Faster R-CNN. Our proposed approach focuses on modifying the base network to handle the dynamic positioning of objects at different scales, either by exploiting the multi-scale feature maps of the CNN (1) or, alternatively, by using input images at multiple resolutions, as sketched below.
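
The snippet below is a minimal sketch of those two scale-handling strategies, assuming PyTorch/torchvision; the chosen VGG-16 layer indices (features.15/22/29, i.e. the conv3_3, conv4_3, and conv5_3 activations) and the resolution factors are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (assumption: PyTorch/torchvision) of the two strategies mentioned
# above: tapping multi-scale feature maps from several VGG-16 stages, versus
# feeding the same traffic image to the network at multiple input resolutions.
import torch
import torch.nn.functional as F
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

vgg16 = torchvision.models.vgg16(weights="IMAGENET1K_V1")
image = torch.rand(1, 3, 600, 800)  # dummy stand-in for a traffic frame

# Strategy 1: multi-scale feature maps taken from successive VGG-16 stages
# (layer indices correspond to the conv3_3, conv4_3, and conv5_3 ReLU outputs).
extractor = create_feature_extractor(
    vgg16,
    return_nodes={"features.15": "conv3", "features.22": "conv4", "features.29": "conv5"},
)
with torch.no_grad():
    feats = extractor(image)
for name, fmap in feats.items():
    print(name, tuple(fmap.shape))  # spatial resolution halves at each stage

# Strategy 2: the same image presented to the backbone at multiple resolutions.
for scale in (0.5, 1.0, 2.0):
    resized = F.interpolate(image, scale_factor=scale, mode="bilinear", align_corners=False)
    with torch.no_grad():
        out = vgg16.features(resized)
    print(f"input scale {scale}: feature map {tuple(out.shape)}")
```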
