Abstract

Fog computing offers low-latency, real-time big-data processing capabilities closer to the network edge. This capability addresses the main bottleneck of a centralized cloud framework: it cannot process, in real time, the latency-sensitive, high-volume video frames generated by Internet of Things-based video surveillance cameras. In addition, recent advances in computer vision offer state-of-the-art image processing capabilities that can be leveraged for real-time surveillance data processing. Deploying these capabilities across several fog computing layers enables novel computer vision-based real-time security solutions. This paper proposes a deep learning-based framework for smart video surveillance that processes real-time frames on two consecutive fog layers, one for action recognition and the other for criminal threat-based response generation. The proposed architecture consists of three major modules. The first module captures surveillance videos through Raspberry Pi cameras deployed in a distributed network. The second module performs action recognition using a deep learning-based model installed on NVIDIA Jetson Nano devices placed on two fog layers. Finally, the security response is generated and broadcast to the law-enforcement agency. To evaluate the proposed model, experiments on semantic segmentation-based scene object recognition were conducted. The experimental results identified a suitable recognition model that can be deployed in the fog layers of the proposed framework.
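
The pipeline described above can be illustrated with a minimal sketch: a capture node forwards frames to a fog node, which runs an off-the-shelf semantic segmentation model for scene object recognition. The endpoint URL, the JPEG-over-HTTP transport, and the DeepLabV3 backbone are assumptions made here for illustration only; the paper's own action-recognition model and fog-layer protocol may differ.

```python
# Illustrative sketch only: Raspberry Pi capture node pushing frames to a fog
# node that runs a pretrained semantic segmentation model on a Jetson Nano.
import cv2                      # frame capture / JPEG encoding on the Pi
import requests                 # simple HTTP push to the first fog layer
import torch
from torchvision import models, transforms

FOG_ENDPOINT = "http://fog-node-1.local:8080/frames"   # hypothetical fog-layer address

def capture_and_forward(camera_index: int = 0) -> None:
    """Runs on the Raspberry Pi: grab one frame and forward it to the fog layer."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if ok:
        _, jpeg = cv2.imencode(".jpg", frame)
        requests.post(FOG_ENDPOINT, data=jpeg.tobytes(),
                      headers={"Content-Type": "image/jpeg"})

# Fog-layer side: scene-object recognition with an off-the-shelf segmentation model.
seg_model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment_frame(bgr_frame) -> torch.Tensor:
    """Returns a per-pixel class map for one frame (runs on the fog device)."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)
    with torch.no_grad():
        out = seg_model(batch)["out"]
    return out.argmax(dim=1).squeeze(0)    # class index per pixel
```

In practice, the per-pixel class map produced at the first fog layer would feed the action-recognition and threat-response stages described in the abstract.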
