Detecting foreground objects in video is crucial to many machine vision applications and computerized video surveillance technologies. Object tracking and detection are essential in identification, surveillance, and navigation systems. Object detection is the task of separating foreground objects from the background in an image. Recent advances in vision systems, including distributed smart cameras, have motivated researchers to develop enhanced machine vision applications for embedded systems. Compared with conventional object detection settings, the performance of feature-based object detection algorithms declines as the dynamics of the video data increase. Motion blur on moving subjects, fast-moving objects, background occlusion, and dynamic background changes within the foreground region of a video frame can all cause problems, leading to insufficient saliency detection. This work develops a deep-learning model to address these issues. A novel object detection method combining YOLOv3 and MobileNet is proposed. First, rather than using the predefined feature maps of the conventional YOLOv3 architecture, the selection of MobileNet feature maps is optimized by analyzing their receptive fields. The work centers on three primary processes, object detection, recognition, and classification, to classify moving objects based on shared features. Experimental results on public datasets and our own dataset show that, compared with existing algorithms, the proposed approach achieves 99% classification accuracy for urban scenes with moving objects. Experiments further show that the proposed model outperforms existing state-of-the-art models in speed and computational cost.
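As a rough illustration of the backbone swap the abstract describes, the sketch below replaces YOLOv3's usual Darknet-53 backbone with MobileNetV2 and taps feature maps at three strides, loosely mirroring a receptive-field-driven selection of scales. This is not the authors' implementation: the backbone split points, the single 1x1 convolution standing in for each YOLOv3 detection head, and all layer indices are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): MobileNetV2 backbone
# feeding three YOLO-style prediction heads at strides 8, 16, and 32.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class MobileNetYOLOSketch(nn.Module):
    def __init__(self, num_classes=20, num_anchors=3):
        super().__init__()
        features = mobilenet_v2(weights=None).features
        # Split the backbone so feature maps can be tapped at three scales.
        # These split points are assumptions chosen for illustration.
        self.stage1 = features[:7]    # ~stride 8,  32 channels out
        self.stage2 = features[7:14]  # ~stride 16, 96 channels out
        self.stage3 = features[14:]   # ~stride 32, 1280 channels out
        out_ch = num_anchors * (5 + num_classes)  # box(4) + obj(1) + classes
        # One 1x1 conv per scale as a stand-in for the full YOLOv3 head.
        self.head1 = nn.Conv2d(32, out_ch, kernel_size=1)
        self.head2 = nn.Conv2d(96, out_ch, kernel_size=1)
        self.head3 = nn.Conv2d(1280, out_ch, kernel_size=1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Raw per-scale prediction maps; anchor decoding to boxes is omitted.
        return self.head1(f1), self.head2(f2), self.head3(f3)

if __name__ == "__main__":
    model = MobileNetYOLOSketch(num_classes=20)
    preds = model(torch.randn(1, 3, 416, 416))
    for p in preds:
        print(p.shape)  # 75-channel maps at 52x52, 26x26, and 13x13
```

In a full pipeline, each per-scale map would be decoded against anchor boxes and passed through non-maximum suppression; the MobileNet backbone is what yields the speed and compute advantage over Darknet-53 that the abstract claims.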