Abstract

Object detection from apron surveillance video faces enormous storage pressure and computing overhead. A large cloud server cluster is typically required, together with high-speed network bandwidth and powerful GPUs for computing support. Designing a hardware-friendly and efficient object detection model is therefore challenging. This paper presents a compression method for outdoor apron surveillance videos, combined with a lightweight detection model so that inference does not depend on a GPU. First, the gray-level variance of dynamic objects is leveraged to binarize the monitoring images; then an improved MobileNet-SSD algorithm is proposed. Moreover, int8 quantization is performed and bit operations are designed to eliminate floating-point operations, which simultaneously accelerates and compresses CNN models with only minor performance degradation. Experimental results on a large-scale dataset of 22k monitoring images demonstrate that the compression ratio of the quantized images reaches up to 21x; combined with the quantized model, detection on apron surveillance images runs at nearly 25 FPS in a pure CPU environment, with an mAP of 86.83% and a model size compressed to 600 KB. The significantly reduced computational complexity makes the method applicable to embedded devices.
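The abstract does not detail the binarization rule, only that it relies on the gray-level variance of dynamic objects. The following is a minimal sketch, assuming a simple per-pixel temporal-variance threshold over a stack of grayscale frames; the paper's actual binarization procedure may differ. The function name and the parameter `k` are illustrative, not from the paper.

```python
import numpy as np

def binarize_by_temporal_variance(gray_frames: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Binarize the newest frame by flagging pixels whose gray level
    deviates strongly from their per-pixel temporal mean.

    gray_frames: uint8 array of shape (T, H, W), the last T grayscale frames.
    k: threshold in units of the per-pixel temporal standard deviation.
    Returns a (H, W) uint8 mask, 1 for dynamic (foreground) pixels.
    """
    frames = gray_frames.astype(np.float32)
    mean = frames.mean(axis=0)          # per-pixel temporal mean
    std = frames.std(axis=0) + 1e-6     # per-pixel temporal std (sqrt of variance)
    latest = frames[-1]
    # Pixels far from their temporal mean are treated as dynamic objects.
    return (np.abs(latest - mean) > k * std).astype(np.uint8)

# usage: stack recent grayscale frames and binarize the most recent one
# frames = np.stack(recent_gray_frames)        # shape (T, H, W)
# foreground_mask = binarize_by_temporal_variance(frames)
```

Similarly, the int8 quantization is only summarized above. The sketch below shows generic symmetric per-tensor int8 quantization with int32 accumulation; unlike the paper's design, which replaces the remaining floating-point work with bit operations, this simplified version keeps a single floating-point rescale at the end.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0 + 1e-12
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(xq: np.ndarray, sx: float, wq: np.ndarray, sw: float) -> np.ndarray:
    """Integer matrix multiply with int32 accumulation and one final rescale."""
    acc = xq.astype(np.int32) @ wq.astype(np.int32)   # avoids int8 overflow
    return acc * (sx * sw)                            # dequantize the accumulator
```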
